How to catch a test failure in Selenium WebDriver using JUnit 4?

I need to write the test case Pass/Fail status to an Excel report. How do I capture the result in a simple way? The Ant XML/HTML reports show the status (Success/Failed), so I believe there must be some way to catch it, maybe in the @AfterClass method.
Can anyone help me out with this?
I am using Selenium WebDriver with JUnit 4 and Apache POI (oh, it's screwing my mind too!) for Excel handling. Let me know if you need more info.
Thanks :)
P.S.: As I am asking a lot of questions, it would be great if someone could suggest changes to make these questions and threads more helpful for others.

I understand your problem; this is what I do in all my test cases now.
In your test case, if you want to check whether two Strings are equal, you might be using code like this:
Assert.assertTrue(value1.equals(value2));
If these values are not equal, an AssertionError is thrown. Instead, you can change your code like this:
String testCaseStatus = "";
if (value1.equals(value2)) {
    testCaseStatus = "success";
} else {
    testCaseStatus = "fail";
}
Now pass testCaseStatus to the code that appends a row to your Excel sheet, which you have implemented using Apache POI. You can also handle an "error" status by wrapping the check in a try/catch block: catch the exception and write the status "error" to the sheet instead.
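As a hedged sketch (the file path, sheet layout, and helper name below are assumptions, not from the original post; it also assumes a reasonably recent Apache POI version), appending a status row could look roughly like this:
import java.io.FileInputStream;
import java.io.FileOutputStream;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

public class ExcelReporter {
    // Appends one (testName, status) row to an existing .xls/.xlsx file.
    public static void appendStatus(String filePath, String testName, String status) throws Exception {
        Workbook workbook;
        try (FileInputStream in = new FileInputStream(filePath)) {
            workbook = WorkbookFactory.create(in);
        }
        Sheet sheet = workbook.getSheetAt(0);
        Row row = sheet.createRow(sheet.getLastRowNum() + 1);
        row.createCell(0).setCellValue(testName);
        row.createCell(1).setCellValue(status);
        try (FileOutputStream out = new FileOutputStream(filePath)) {
            workbook.write(out);
        }
        workbook.close();
    }
}
You could then call ExcelReporter.appendStatus("results.xlsx", "myTest", testCaseStatus) at the end of each test or from an @After method.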
Edit: I just figured out how to use the TestResult class. Here is some sample code. This is the test case class, ExampleTest:
import junit.framework.AssertionFailedError;
import junit.framework.TestResult;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

public class ExampleTest implements junit.framework.Test {

    @Before
    public void setUp() throws Exception {
    }

    @After
    public void tearDown() throws Exception {
    }

    @Test
    public void test(TestResult result) {
        try {
            Assert.assertEquals("hari", "");
        } catch (AssertionFailedError e) {
            result.addFailure(this, e);
        } catch (AssertionError e) {
            result.addError(this, e);
        }
    }

    @Override
    public int countTestCases() {
        return 1; // this class represents a single test case
    }

    @Override
    public void run(TestResult result) {
        test(result);
    }
}
I call the above test case from this code:
import junit.framework.TestResult;
import junit.framework.TestSuite;

public class Test {
    public static void main(String[] args) {
        TestResult result = new TestResult();
        TestSuite suite = new TestSuite();
        suite.addTest(new ExampleTest());
        suite.run(result);
        System.out.println(result.errorCount());
    }
}
You can run many test cases by adding them to this suite and then get the overall result using the TestResult failures() and errors() methods.
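As a small sketch building on the result object above (assuming JUnit 4's junit.framework.TestResult, where failures() returns an Enumeration&lt;TestFailure&gt;; older JUnit 3 versions return a raw Enumeration), you could walk the collected failures like this:
// Walk the recorded failures; result.errors() can be handled the same way.
java.util.Enumeration<junit.framework.TestFailure> failures = result.failures();
while (failures.hasMoreElements()) {
    junit.framework.TestFailure failure = failures.nextElement();
    System.out.println(failure.toString()); // failed test plus the assertion message
}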
You can read more on this from here.

Related

How to print the number of failures for every test case using SoftAssert in TestNG

I have a test case that performs the following:
@Test
public void testNumber() {
    SoftAssert softAssert = new SoftAssert();
    List<Integer> nums = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
    for (Integer num : nums) {
        softAssert.assertTrue(num % 2 == 0, String.format("\n Old num : %d", num));
    }
    softAssert.assertAll();
}
The above test will fail for the numbers 1, 3, 5, 7 and 9, and five failure messages will be printed in the test report.
If I run the test against a larger data set, it is difficult to get the count of test data items that failed.
Is there an easier way to get the number of failing test data items for a test case using SoftAssert itself?
There is no easy way to get the count of failed test data from SoftAssert itself.
Alternatively, you can extend the Assertion class. Below is a sample that reports the number of failed assertions and prints each failure on a new line.
package yourpackage;

import java.util.Map;

import org.testng.asserts.Assertion;
import org.testng.asserts.IAssert;
import org.testng.collections.Maps;

public class AssertionExtn extends Assertion {

    private final Map<AssertionError, IAssert> m_errors;

    public AssertionExtn() {
        this.m_errors = Maps.newLinkedHashMap();
    }

    protected void doAssert(IAssert a) {
        onBeforeAssert(a);
        try {
            a.doAssert();
            onAssertSuccess(a);
        } catch (AssertionError ex) {
            onAssertFailure(a, ex);
            this.m_errors.put(ex, a);
        } finally {
            onAfterAssert(a);
        }
    }

    public void assertAll() {
        if (!this.m_errors.isEmpty()) {
            StringBuilder sb = new StringBuilder("Total assertion failed: " + m_errors.size());
            sb.append("\n\t");
            sb.append("The following asserts failed:");
            boolean first = true;
            for (Map.Entry<AssertionError, IAssert> ae : this.m_errors.entrySet()) {
                if (first) {
                    first = false;
                } else {
                    sb.append(",");
                }
                sb.append("\n\t");
                sb.append(ae.getKey().getMessage());
            }
            throw new AssertionError(sb.toString());
        }
    }
}
You can use this as:
@Test
public void test() {
    AssertionExtn assertion = new AssertionExtn();
    assertion.assertEquals(false, true, "failed");
    assertion.assertEquals(false, false, "passed");
    assertion.assertEquals(0, 1, "brokedown");
    assertion.assertAll();
}

How can I have @After run even if a Cucumber step failed?

We have several Cucumber step definitions that modify the database, which would mess up later tests if the changes aren't cleaned up after the test runs. We do this with a method annotated with @After that cleans things up.
The problem is that if there's a failure in one of the tests, the @After method doesn't run, which leaves the database in a bad state.
So the question is: how can I make sure the @After method always runs, regardless of whether a test failed or not?
I saw this question, but it's not exactly what I'm trying to do, and the answers don't help.
If it helps, here is part of one of the tests. It's been greatly stripped down, but it has what I think are the important parts.
import static org.hamcrest.MatcherAssert.assertThat;

import cucumber.api.java.After;
// other imports elided in the original post

public class RunMacroGMUStepDefinition
{
    @Autowired
    protected ClientSOAPRecordkeeperInterface keeper;

    @Given( "^the following Macro exists:$" )
    @Transactional
    public void establishDefaultPatron( final DataTable dataTable )
    {
        for ( final DataTableRow dataTableRow : dataTable.getGherkinRows() )
        {
            // Stuff happens here
            keeper.insert( macroScriptRecord );
        }
    }

    @After( value = "@RunMacroGMU" )
    @Transactional
    public void teardown()
    {
        for ( int i = 0; i < macroScripts.size(); i++ )
        {
            keeper.delete( macroScripts.get( i ) );
        }
    }

    // Part of @Then
    private void compareRecords( final String has /* other parameters elided */ )
    {
        // Stuff happens here
        if ( has.equals( "include" ) )
        {
            assertThat( "No matching data found", foundMatch, equalTo( true ) );
        }
        else
        {
            assertThat( "Found matching data", foundMatch, equalTo( false ) );
        }
    }
}
I personally use Behat (the PHP distribution of Cucumber), and we use something like this to take screenshots after a failed test. I did a bit of searching and found this snippet in Java that may help with this situation.
@After
public void tearDown(Scenario scenario) {
    if (scenario.isFailed()) {
        // insert whatever you would like to run after a failing test here
    }
    driver.close();
}
I hope this helps.
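As a hedged extension of that snippet (assuming a Selenium WebDriver field named driver and the older cucumber.api packages; the class name Hooks is made up), attaching a screenshot to the report before cleanup might look like this:
import cucumber.api.Scenario;
import cucumber.api.java.After;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public class Hooks {

    private WebDriver driver; // assumed to be initialised elsewhere

    @After
    public void tearDown(Scenario scenario) {
        if (scenario.isFailed()) {
            // Attach a screenshot so the failure is easier to diagnose in the report.
            byte[] screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
            scenario.embed(screenshot, "image/png");
        }
        driver.quit();
    }
}
In newer Cucumber versions the hook lives in io.cucumber.java and the embed call has been replaced by scenario.attach, so adjust to your dependency.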

Cucumber JVM: Test if the correct exception is thrown

How to test that the correct exception is thrown when using Cucumber JVM? When using JUnit, I would do something like this:
@Test(expected = NullPointerException.class)
public void testExceptionThrown() {
    taskCreater.createTask(null);
}
As you can see, this is very elegant. But how can I achieve the same elegance when using Cucumber JVM? My test looks like this right now:
@Then("the user gets a Null pointer exception$")
public void null_exception_thrown() {
    boolean result = false;
    try {
        taskCreater.createTask(null);
    } catch (NullPointerException e) {
        result = true;
    }
    assertTrue(result);
}
Note the need for a try..catch followed by an assertTrue on a flag.
Testing the not-happy path can be hard. Here's a nice way that I've found to do it with Cucumber.
Scenario: Doing something illegal should land you in jail
Then a failure is expected
When you attempt something illegal
And it fails.
OK, don't shoot me for putting the Then before the When; I just think it reads better, but you don't have to do that.
I store my exceptions in a (Cucumber-scoped) world object; you could also do it in your step file, but that will limit you later.
import java.util.ArrayList;
import java.util.List;

public class MyWorld {

    private boolean expectException;
    private List<RuntimeException> exceptions = new ArrayList<>();

    public void expectException() {
        expectException = true;
    }

    public void add(RuntimeException e) {
        if (!expectException) {
            throw e;
        }
        exceptions.add(e);
    }

    public List<RuntimeException> getExceptions() {
        return exceptions;
    }
}
Your steps are then pretty simple:
@Then("a failure is expected")
public void a_failure_is_expected() {
    world.expectException();
}
In a step where you are (at least sometimes) expecting an exception, catch it and add it to the world.
@When("you attempt something illegal")
public void you_attempt_something_illegal() {
    try {
        myService.doSomethingBad();
    } catch (RuntimeException e) {
        world.add(e);
    }
}
Now you can check whether the exception was recorded in the world.
@And("it fails")
public void it_fails() {
    assertThat(world.getExceptions(), is(not(empty())));
}
The most valuable thing about this approach is that it won't swallow an exception when you don't expect it.
Have you tried using the JUnit @Rule annotation with ExpectedException, like this:
@Rule
public ExpectedException expectedEx = ExpectedException.none();

@Then("the user gets a Null pointer exception$")
public void null_exception_thrown() {
    expectedEx.expect(NullPointerException.class);
    //expectedEx.expectMessage("the message");
    taskCreater.createTask(null);
}
Behaviour testing verifies that your project follows its specification, and I doubt the user expects a NullPointerException while using the system.
In my opinion (I'm not familiar with your project), exceptions should only be checked in unit tests, as they correspond to an unexpected error or a user mistake.
It's very unusual to check for exceptions during behaviour tests. If an exception is thrown during a test, it should fail.
For instance, in test.feature:
Given that I have a file "users.txt"
And I try to import users /* If an Exception is thrown here, the test fails */
Then I should see the following users: /* list of users */
In my unit test, I would have:
@Test(expected = MyException.class)
public void importUsersShouldThrowMyExceptionWhenTheFileIsNotFound() {
    // mock a call to a file and throw an exception
    Mockito.when(reader.readFile("file.txt")).thenThrow(new FileNotFoundException());
    importer.importUsers();
}
I suggest you use org.assertj:assertj-core.
Thanks to the Assertions class, you can simplify your assertion like below:
@Then("the user gets a Null pointer exception$")
public void null_exception_thrown() {
    Assertions.assertThatThrownBy(() -> taskCreater.createTask(null))
        .isInstanceOf(NullPointerException.class);
}
Never used Cucumber, but would
public void null_exception_thrown() {
    try {
        taskCreater.createTask(null);
        fail("Null Pointer Expected");
    } catch (NullPointerException e) {
        // Do Nothing
    }
}
work for you?
I believe Cucumber is meant for higher-level acceptance tests rather than lower-level unit tests; in the latter case we would be testing the structure rather than the behaviour, which is not the intended use of the Cucumber framework.
I used to put the expected exception name in the feature steps. For a calculator test example, here is my feature file entry for the division exception:
Scenario: Dividing a number with ZERO
    Given I want to test calculator
    When I insert 5.5 and 0.0 for division
    Then I got Exception ArithmeticException
So, as you can see, I have added the exception name, and in the step definition I read it as a parameter. We need to catch the exception thrown when dividing by zero, store it in a variable, and then compare its class name (the exception name). So, in the step definition where I divide by zero:
private Exception actualException;

@When("^I insert (.+) and (.+) for division$")
public void iInsertAndForDivision(double arg0, double arg1) throws Throwable {
    try {
        result = calculator.div(arg0, arg1);
    } catch (Exception e) {
        actualException = e;
    }
}
And I need the step definition for validating the exception:
@Then("^I got Exception (.+)$")
public void iGotArithmeticException(String aException) throws Throwable {
    Assert.assertEquals("Exception mismatch", aException, actualException.getClass().getSimpleName());
}
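For completeness, a hedged sketch of the supporting pieces these steps assume (the Calculator class, the calculator and result fields, and the remaining Given step); the names are assumptions, not taken from the linked project:
// Hypothetical calculator assumed by the steps above; plain double division by zero
// returns Infinity in Java, so it throws explicitly when the divisor is zero.
public class Calculator {
    public double div(double a, double b) {
        if (b == 0.0) {
            throw new ArithmeticException("/ by zero");
        }
        return a / b;
    }
}

// Fields and the remaining step definition inside the step class (names assumed):
private final Calculator calculator = new Calculator();
private double result;

@Given("^I want to test calculator$")
public void iWantToTestCalculator() throws Throwable {
    // nothing to set up in this sketch
}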
The complete project can be seen here: Steps for cucumber

TestNG Test Case failing with JMockit "Invalid context for the recording of expectations"

The following TestNG (6.3) test case generates the error "Invalid context for the recording of expectations"
@Listeners({ Initializer.class })
public final class ClassUnderTestTest {

    private ClassUnderTest cut;

    @SuppressWarnings("unused")
    @BeforeMethod
    private void initialise() {
        cut = new ClassUnderTest();
    }

    @Test
    public void doSomething() {
        new Expectations() {
            MockedClass tmc;
            {
                tmc.doMethod("Hello"); result = "Hello";
            }
        };
        String result = cut.doSomething();
        assertEquals(result, "Hello");
    }
}
The class under test is below.
public class ClassUnderTest {

    MockedClass service = new MockedClass();
    MockedInterface ifce = new MockedInterfaceImpl();

    public String doSomething() {
        return (String) service.doMethod("Hello");
    }

    public String doSomethingElse() {
        return (String) ifce.testMethod("Hello again");
    }
}
I am assuming that because I am using the @Listeners annotation I do not need the -javaagent command line argument. This assumption may be wrong.
Can anyone point out what I have missed?
The JMockit-TestNG Initializer must run once for the whole test run, so using @Listeners on individual test classes won't work.
Instead, simply upgrade to JMockit 0.999.11, which works transparently with TestNG 6.2+, without any need to specify a listener or the -javaagent parameter (unless running on JDK 1.5).
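As a hedged sketch of what the test class might look like after that upgrade (no listener and no -javaagent; the @Mocked field and assertion style are assumptions, and the exact API may vary between JMockit versions):
import static org.testng.Assert.assertEquals;

import mockit.Expectations;
import mockit.Mocked;
import org.testng.annotations.Test;

public final class ClassUnderTestTest {

    @Mocked
    MockedClass tmc; // mocked for every test method in this class

    @Test
    public void doSomething() {
        new Expectations() {{
            tmc.doMethod("Hello"); result = "Hello";
        }};

        String actual = new ClassUnderTest().doSomething();

        assertEquals(actual, "Hello");
    }
}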

Grails integration test suite

We have a set of integration tests which depend on the same set of static data. Since the amount of data is huge, we don't want to set it up per test. Is it possible to set up the data at the start, run a group of tests, and roll back the data at the end?
What we effectively want is a rollback at the test suite level rather than at the test case level. We are using Grails 1.3.1; any pointers would be highly helpful. Thanks in advance.
-Prakash
For one test case you could use:
@BeforeClass
public static void setUpBeforeClass() throws Exception {
}

@AfterClass
public static void tearDownAfterClass() throws Exception {
}
I haven't tried a suite of test cases (yet).
I did have some trouble using findByName in the static methods and had to resort to saving an id and using get.
I did try rolling up a suite (see the sketch below), but no joy; I kept getting a "no runnable methods" error.
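As a hedged sketch of a plain JUnit 4 suite (the class names are placeholders, not from the original project); the "no runnable methods" error usually means the runner found no @Test methods in the class it was asked to run, which a @RunWith(Suite.class) wrapper avoids:
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Runs the listed test classes as one suite; the wrapper itself needs no @Test methods.
@RunWith(Suite.class)
@Suite.SuiteClasses({ FirstIntegrationTests.class, SecondIntegrationTests.class })
public class IntegrationTestSuite {
}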
You can take control of the transaction/rollback behaviour by marking your test case as non-transactional and managing data, transactions and rollbacks yourself. Example:
class SomeTests extends GrailsUnitTestCase {

    static transactional = false
    static boolean testDataGenerated = false

    protected void setUp() {
        if (!testDataGenerated) {
            generateTestData()
            testDataGenerated = true
        }
    }

    void testSomething() {
        ...test...
    }

    void testSomethingTransactionally() {
        DomainObject.withTransaction {
            ...test...
        }
    }

    void testSomethingTransactionallyWithRollback() {
        DomainObject.withTransaction { status ->
            ...test...
            status.setRollbackOnly()
        }
    }
}