WebDriver - Check for exceptions in log files - Selenium

I have the following requirement:
- When my @Test method executes, check the log files.
- If there is any exception in the log files, fail the test case; otherwise pass it.
Currently I have implemented it like this:
- Clearing the log files (3-4 log files) in @BeforeTest code
- Checking for exceptions in all log files in @AfterTest code
The issue is that when a @Test method passes/fails, TestNG marks the test case's execution status as PASS/FAIL at that point, so even though there is an exception in my log file, my test case still passes.
Can you please suggest any workarounds for this?
Vishal

Checking for exceptions in the @AfterMethod will not help, because the injected result only reflects the outcome of the @Test method.
For example:
@Test
public void testCase(){
}
@AfterMethod
public void tearDown(ITestResult result){
}
In the sample above, result holds the result of the @Test method. If the test case passes, it will be reported as passed in the @AfterMethod as well.
Workaround:
Either do the log check inside your @Test method itself; based on that, your @AfterMethod will work fine, given that @AfterMethod executes after every test method.
Or create an @AfterClass method which checks all the log files at the end of the class.
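As a rough sketch of the first option (checking inside the @Test method itself), assuming a hypothetical logContainsException() helper and a placeholder log path, neither of which is from the original question:
import org.testng.Assert;
import org.testng.annotations.Test;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class LogCheckingTest {

    // Hypothetical helper: scans a log file for the word "Exception".
    private boolean logContainsException(String logPath) throws IOException {
        try (Stream<String> lines = Files.lines(Paths.get(logPath))) {
            return lines.anyMatch(line -> line.contains("Exception"));
        }
    }

    @Test
    public void testCase() throws IOException {
        // ... perform the actual test steps here ...

        // Fail the test explicitly if the monitored log file reports an exception.
        Assert.assertFalse(logContainsException("logs/app.log"),
                "Exception found in log file, marking the test as failed");
    }
}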

Related

TestNG - get group names without the BeforeMethod

I know that using reflection like this I can get the list of all groups of the @Test:
@BeforeMethod
public void befrMethod(Method met){
    Test t = met.getAnnotation(Test.class);
    System.out.println(t.groups());
}
But is there a way to get the groups list while inside the @Test and not in the @BeforeMethod?
Because I am running tests in parallel and this approach is not working well for me.
A @Test method supports ITestContext dependency injection, and this interface has a method:
java.lang.String[] getIncludedGroups() - Returns: all the groups that are included for this test run.
"ITestContext (testng 7.3.0 API)" https://javadoc.io/doc/org.testng/testng/latest/org/testng/ITestContext.html
So try:
@Test
public void befrMethod(ITestContext met){
    String[] groups = met.getIncludedGroups();
}

Ignore a JUnit test with a parameterized test

I run a parameterized test with JUnit, but I want to skip some tests based on a flag. The parameters are stored in a CSV file, and when the flag is "off" I want to skip that test. How can I do this?
You can use the Assume class.
@Test
public void something() throws Exception {
    Assume.assumeFalse(valueFromCsv.equals("off"));
}
The test will be skipped by JUnit if the assumption is not fulfilled.
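To make this concrete, here is a minimal sketch assuming JUnit 4's Parameterized runner; the class name, field names, and inline data rows are illustrative stand-ins for the values you would read from your CSV file:
import java.util.Arrays;
import java.util.Collection;

import org.junit.Assume;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class CsvDrivenTest {

    private final String input;
    private final String flag;

    public CsvDrivenTest(String input, String flag) {
        this.input = input;
        this.flag = flag;
    }

    // In a real project these rows would be read from the CSV file.
    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
                { "row1", "on" },
                { "row2", "off" } // this row will be skipped
        });
    }

    @Test
    public void something() {
        // Skip this parameter set when its flag is "off".
        Assume.assumeFalse("off".equals(flag));
        // ... assertions for enabled rows go here ...
    }
}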

TestNG - run tests in order: impossible scenario?

I've tried many ways with no success, and I'm starting to believe this is not achievable in TestNG; I'd just like to confirm it with you.
I have a web service I'm testing and I need to run a few basic scenarios.
My current test methods, each with a @Test annotation (each needs to be runnable as a single test):
dbResetTest
clearCacheTest
openURLTest
loginTest
actionXTest
actionYTest
I also need to run these scenarios, consisting of the above tests run IN ORDER:
1. Test login feature (openURLTest -> dbResetTest -> clearCacheTest -> loginTest)
2. Test X after login (openURLTest -> dbResetTest -> clearCacheTest -> loginTest -> actionXTest)
3. Test Y after clearing cache (clearCacheTest -> actionYTest)
The issue is: if I make the tests from scenarios 1 & 2 depend on the others, I won't be able to run scenario 3, because clearCacheTest does not depend on any other test in that particular scenario. I've tried running those tests in order through the XML and by using dependencies, but with no success.
Of course I could make actionYTest call clearCacheTest directly, but then if clearCacheTest fails the report will show actionYTest as the failing one, which is what I'm trying to avoid.
I'm pretty sure now that what I need is not achievable in TestNG, but maybe I'm wrong...
I think you should change your tactics slightly. Instead of treating these (dbResetTest, etc.) as test classes, make them test methods and use dependsOnMethods programmatically (not from the XML) instead of dependsOnGroups. Then you can implement the required logic rather easily (every test is unique --> @Test annotation, every test is executed at a certain priority --> use the priority parameter). Scenarios 1, 2 and 3 then become your test classes. Here is how you do it:
public class LoginFeature {

    @Test(priority = 1)
    public void openURLTest() {
    }

    @Test(priority = 2, dependsOnMethods = "openURLTest")
    public void dbResetTest() {
    }

    @Test(priority = 3, dependsOnMethods = "dbResetTest")
    public void clearCacheTest() {
    }

    @Test(priority = 4, dependsOnMethods = "clearCacheTest")
    public void loginTest() {
    }
}
This way, if something fails in between, the rest of the scenario will automatically be skipped and you won't need to call clearCacheTest directly.
Hope this helps!
Update
After OP's comment
Well, again I think you have a bit of a design issue. For your methods to be called multiple times, they need to sit somewhere accessible. You are almost there with your approach, but not quite. Here is how you can call the methods multiple times and run them every time from scratch (I'll show you the code first and then explain in detail):
parent class
public class TestBase {

    // include here all your important methods *without* the @Test annotation
    public void dbReset() {
        // perform db reset
    }

    public void clearCache() {
        // clear browser cache
    }

    public void login() {
        // log into the application
    }

    public boolean openURL() {
        // try to open the test URL
        boolean didIReachTestURLSuccessfully = true; // set according to the navigation outcome
        return didIReachTestURLSuccessfully;
    }
}
child class
public class LoginFeature extends TestBase {

    @Test(priority = 1)
    public void attemptToResetDataBase() {
        dbReset();
    }

    @Test(priority = 2, dependsOnMethods = "attemptToResetDataBase")
    public void clearCacheTest() {
        clearCache();
    }

    @Test(priority = 3, dependsOnMethods = "clearCacheTest")
    public void verifySuccessfulLogin() {
        login();
    }
}
So, you include all of your test methods in a parent class called TestBase. Then you create your test (loginTest, for example) as a class that extends TestBase. Now you can call your methods multiple times, treat each call as an individual test, and connect them with dependencies according to your needs (here each test depends on the prior method, but you could rearrange them, make them all depend on one method, or on none).
Because your test class inherits from TestBase, you don't even need to create an object to access those methods; you can call them directly.
Hope this solves it for you; do not hesitate to write a comment if you need more info.
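For scenario 3 (clear the cache, then run action Y), the same pattern applies; the sketch below is illustrative, not from the original answer, and the actionY body is a placeholder:
public class ActionYFeature extends TestBase {

    @Test(priority = 1)
    public void clearCacheTest() {
        clearCache();  // reused from TestBase, independent of the login scenario
    }

    @Test(priority = 2, dependsOnMethods = "clearCacheTest")
    public void actionYTest() {
        // perform action Y; if clearCacheTest fails, this test is skipped,
        // so the report never blames actionYTest for a cache failure
    }
}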

NullPointerException when using multiple @Before in Cucumber-JVM

I am using cucumber-jvm.
I have an init method to initialize all the necessary stuff, such as the browser dimensions, application url etc.
I have put this init method under a @Before (cucumber.api) annotation.
@Before
public void initLoginPage() throws Exception {
    getBrowserDimension();
    setBrowserCapabilities();
    init(getApplicationUrl());
}
My life was fine with this running smoothly.
Now, I also wanted to use @Before for some tags at the scenario level.
Say my scenario looks like:
@myTag
When I do blah
Then I should get blah-blah
And I wanted to use something like:
@Before("@myTag")
public void beforeScenario(){
    blah = true;
}
But the moment I add another @Before, it starts giving a NullPointerException. I tracked it back to the runBeforeHooks and runHookIfTagsMatch methods in Cucumber's Runtime class.
They throw the exception for the @Before (for initLoginPage()) itself.
Is there a conflict created by having multiple @Before's?
How can I resolve this?
I found the solution to get this working.
The problem was that the @Before hooks were getting picked up in a random order. My assumption that a @Before without parameters would execute before @Before("@myTag") did not hold.
So the trick is to assign the order parameter (in @Before) an explicit value. The default order assigned to a @Before is 10000, so if we define the order value explicitly, it works.
So basically, my code for the initializer now looks like:
@Before(order = 1)
public void initLoginPage() throws Exception {
    getBrowserDimension();
    setBrowserCapabilities();
    init(getApplicationUrl());
}
That solved my problem.
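For completeness, here is a minimal sketch (not from the original answer) of how the tagged hook can be given an explicit, later order so it is guaranteed to run after initLoginPage(); the Hooks class name and method bodies are placeholders:
import cucumber.api.java.Before;

public class Hooks {

    @Before(order = 1)
    public void initLoginPage() throws Exception {
        // global setup: browser dimensions, capabilities, application URL
    }

    // Runs only for scenarios tagged @myTag, and always after initLoginPage()
    // because its order value is higher.
    @Before(value = "@myTag", order = 2)
    public void beforeScenario() {
        // scenario-specific setup
    }
}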

Test Case Structure

I am working on a Selenium project and have certain doubts about converting a manual test case into a Selenium test script.
Assume I have 2 test cases as follows.
First case:
1. Navigate to Gmail
2. Login to Gmail with valid username and password
3. Check Inbox for new emails
4. Read the email
5. Sign out
Second case:
1. Navigate to Gmail
2. Login to Gmail with valid username and password
3. Compose an email
4. Send the email
5. Sign out
My doubts:
Is each test case one class in Java?
Is each test step a method in Java?
Thanks, some inputs would help me.
It depends on the complexity and reusability of your Java-Selenium code.
Is each test case one class in Java?
---> In this case, you can write a method for the login functionality, where you pass the username and password as arguments. This method can then be called inside any class (any class you write to test another test case as well) whenever you need to log in.
So a test case can be a class. If it is a single class, it will be easier to debug and maintain. If the test case is too complex, you can split the functionality into two or more classes.
Is each test step a method in Java?
---> Yes, it can be. When you are checking the login or signout functionality, you will be calling the login method or signout method respectively. Sometimes, if the logic cannot be reused and is specific to one application only, it need not be a separate method; you write the logic inline instead of calling an existing method.
It's based on your requirement.
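As a quick illustration of the reusable login method described above, here is a minimal sketch; the class name, element locators, and method signature are hypothetical, not from the original answer:
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginHelper {

    // Reusable login method: any test class can call this with its own credentials.
    public static void login(WebDriver driver, String username, String password) {
        driver.findElement(By.id("username")).sendKeys(username);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("signInButton")).click();
    }
}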
Example:
public class GmailTest {

    @BeforeClass
    public void beforeClass() {
        // 1. Navigate to Gmail
        // 2. Sign in
    }

    @BeforeMethod
    public void beforeMethod() {
    }

    @Test
    public void testInbox() {
        // Check Inbox for new emails, read the email
    }

    @Test
    public void testCompose() {
        // Compose an email, send the email
    }

    @AfterMethod
    public void afterMethod() {
    }

    @AfterClass
    public void afterClass() {
        // Sign out
    }
}
@BeforeClass: The annotated method will be run before the first test method in the current class is invoked.
@AfterClass: The annotated method will be run after all the test methods in the current class have been run.
@BeforeMethod: The annotated method will be run before each test method.
@AfterMethod: The annotated method will be run after each test method.
For more info regarding TestNG annotations, see the TestNG documentation.
I once had the exact same problem (but I'm using Python).
So this is what I've done:
1) Each class is a test case.
2) Each method is a test step.
3) Within the class, set up and tear down completely back to the initial state (so it can be used for distribution later).
4) Create the logic: if one method fails, the rest of the methods in the class are not run (they fail automatically); see the sketch after this list.
5) (!!) Create the logic: if a method changes the state, add a "tear down" for it.
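Point 4 maps directly onto TestNG's dependsOnMethods in Java (the answer above used Python, so this is only an illustrative sketch with placeholder step names, not the author's actual code):
import org.testng.annotations.Test;

public class ComposeEmailTest {

    @Test
    public void login() {
        // test step 1: log into the application
    }

    // If login() fails, TestNG automatically skips the dependent steps,
    // which is the "fail the rest automatically" logic from point 4.
    @Test(dependsOnMethods = "login")
    public void composeEmail() {
        // test step 2: compose the email
    }

    @Test(dependsOnMethods = "composeEmail")
    public void sendEmail() {
        // test step 3: send the email
    }
}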