@Test(enabled=false) is not shown as skipped test case - selenium

We have a number of TestNG tests that are disabled because the functionality they cover is not yet present (enabled = false), but when the test classes are executed, the disabled tests do not show up as skipped in the TestNG report. We'd like to know at execution time how many tests are disabled (i.e. skipped). At the moment we have to count the occurrences of enabled = false in the test classes, which is tedious.
Is there a different annotation to use or something else we are missing so that our test reports can display the number of disabled tests?
For example, the method below still gets executed:
@Test(enabled = false)
public void ignoretestcase() {
    System.out.println("Ignore Test case");
}

You can throw a SkipException at runtime so the test is reported as skipped:
throw new SkipException("Skip the test");
Check this post: Skip TestNG test in runtime
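As a minimal sketch of this approach (the feature flag and class name are hypothetical, standing in for "functionality not yet present"), a test that skips itself at runtime might look like:

```java
import org.testng.SkipException;
import org.testng.annotations.Test;

public class SkipExample {

    // Hypothetical flag representing functionality that is not yet present
    private static final boolean FEATURE_READY = false;

    @Test
    public void testNewFeature() {
        if (!FEATURE_READY) {
            // Unlike enabled = false, which hides the test from the run,
            // a SkipException makes TestNG count the test as skipped
            throw new SkipException("Feature not implemented yet");
        }
        // real assertions would go here
    }
}
```

Because the test method is still invoked, it appears in the report with a skipped status instead of vanishing from the counts entirely.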

Related

Retrieve output log when test fails during setup

I'm running automated unit tests with SpecFlow and Selenium. SpecFlow offers BeforeTestRun, BeforeFeature, and BeforeScenario attributes to execute code between tests at the appropriate time.
I'm also using log4net to log test output.
When a test fails during the test or during the BeforeScenario phase, I can see the output logged.
But when a test fails during BeforeTestRun or BeforeFeature, there is no output available.
This makes it difficult to diagnose when the test fails during the early stages on the remote testing server and all I have are the output logs.
Is there any way to use log4net to get output logs when the test fails before the individual test has begun?
You can implement a custom method that inspects the TestContext / TestResult objects, and call it in [TearDown] or at the end of the test:
// requires: using NUnit.Framework; using NUnit.Framework.Interfaces;
if (TestContext.CurrentContext.Result.Outcome.Status == TestStatus.Failed)
{
    string message = TestContext.CurrentContext.Result.Message;
    string logs = TestContext.CurrentContext.Result.StackTrace;
}
else if (TestContext.CurrentContext.Result.Outcome.Status == TestStatus.Passed)
{
    // user-defined action
}
else
{
    string otherLog = TestContext.CurrentContext.Result.Outcome.Status.ToString();
}
Actually it's odd that the log isn't shown when a test fails; it displays fine for me.
What I suggest is to check that your log4net configuration pushes all the logs to the console. If you haven't done any special configuration, your logger should have a console appender by default.
I initialise my logger like this:
private static readonly ILog log =
    LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
Another guess is that when your test fails in a [OneTimeSetUp] fixture method, no log output has been captured yet.
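If the console appender turns out to be missing, a minimal log4net configuration that writes everything to the console might look like this (a sketch; the appender name and pattern are illustrative):

```xml
<log4net>
  <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
    <layout type="log4net.Layout.PatternLayout">
      <!-- timestamp, thread, level, logger name, message -->
      <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="ConsoleAppender" />
  </root>
</log4net>
```

With output going to the console, the test runner can capture it even for failures that happen before an individual scenario starts.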

Extent report not generated when script abruptly ends

I have a POM-based Selenium framework. I am using Extent Reports, and the reports are generated fine when all of the scripts run. If one of the scripts fails abruptly (e.g. the browser disappears), my script fails and the report is not generated.
Example: I have 3 scripts to run as part of my driver script. While the 3rd script is running, if something goes wrong (like the browser disappearing), the report is not generated. I want the Extent report generated whenever execution fails or stops. My driver script calls extent.flush() as the last step of the scripts. How can we generate the report on failure? If the failure is due to an object not being found, then I am able to get the report.
How can I generate the report whenever I stop the execution?
Any help is greatly appreciated.
Thanks,
Raju
I'm assuming that you are using TestNG; if so, add the parameter alwaysRun = true to the TestNG configuration methods, e.g.
@BeforeMethod(alwaysRun = true)
public void beforeMethod() {
    // your code
}
@AfterMethod(alwaysRun = true)
public void afterMethod() {
    // your code
}

How to get disabled test cases count in Jenkins result?

Suppose I have 10 test cases in a test suite, of which 2 are disabled. I want those two test cases to appear in the test result of the Jenkins job, e.g. passed = 7, failed = 1, and disabled/not run = 2.
By default, TestNG generates a report for your test suite; refer to the index.html file under the test-output folder. If you click the "Ignored Methods" link, it will show you all the ignored test cases, their class names, and the count of ignored methods.
All test cases annotated with @Test(enabled = false) will show up under the "Ignored Methods" link.
I have attached a sample image. Refer below.
If your test generates JUnit XML reports, you can use the JUnit plugin to parse these reports after the build (as a post-build action). Then, you can go into your build and click 'Test Result'. You should see a breakdown of how the execution went (including passed, failed, and skipped tests).
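For reference, a skipped test shows up in a JUnit XML report roughly like this (a minimal sketch; the names and counts are illustrative):

```xml
<testsuite name="ExampleSuite" tests="10" failures="1" skipped="2">
  <testcase classname="com.example.LoginTest" name="ignoretestcase">
    <!-- The Jenkins JUnit plugin counts testcases containing a
         <skipped/> element toward the "skipped" total -->
    <skipped/>
  </testcase>
  <!-- ... remaining testcase elements ... -->
</testsuite>
```

So as long as your runner emits disabled tests as skipped testcases, the breakdown appears on the build's 'Test Result' page.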

How to debug a single TestNG test with parameters in IntelliJ without using @Optional?

I have the following test case that I want to debug in IntelliJ. I don't want to use the @Optional("defaultValue") annotation because I want to debug the test with a real value that changes every time I debug; setting default values every time I want to run the tests is not practical.
@Test(parameters = { "param1" })
public void testExample(String param1) {
    // do something with param1
}
So, is there a way to define the test data somewhere in IntelliJ so that when I right-click and debug, it picks up the value of param1? Or maybe there is a TestNG plugin to do that?
Note: I don't want to use command-line Maven + Surefire.
Just configure the parameters part of the IntelliJ runner: https://www.jetbrains.com/help/idea/2016.1/run-debug-configuration-testng.html?origin=old_help#config
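In that run configuration you can point TestNG at a suite file (or set the parameters directly); a minimal testng.xml carrying the parameter might look like this (a sketch; the suite, test, and class names are illustrative):

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="DebugSuite">
  <!-- this value is injected into the test's param1 parameter -->
  <parameter name="param1" value="real-value-to-debug"/>
  <test name="SingleTest">
    <classes>
      <class name="com.example.ExampleTest"/>
    </classes>
  </test>
</suite>
```

Editing the value here before each debug session avoids hard-coding a default with @Optional.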

How to write the same Codeception acceptance test case with many different sets of inputs

In Codeception acceptance testing, how do I run/write the same test case with many different sets of inputs?
Here is my sample acceptance test (I am using the page object concept).
loginCept.php code
$I = new AcceptanceTester($scenario);
$I->wantTo('perform actions and see result');
$I->login($I,$m);
Acceptance.php file
class Acceptance extends \Codeception\Module
{
public function login($I)
{
$I->amOnPage(login::$loginIndex);
$I->wait(2);
$I->fillField(login::$userName,"test@gmail.com");
$I->fillField(login::$password,"test");
$I->click(login::$submitButton);
$I->see(login::$assertionWelcome);
$I->wait(2);
$I->click(login::$logoutLink);
}
}
How do I run the same login with multiple sets of inputs in an acceptance test?
I have tried passing the inputs as an array and calling the test case in a for loop with the array values as input parameters; in Acceptance.php, the multiple sets of inputs can then be handled with an if block.
This runs the test as only one test case with different assertions.
But the test case only runs until it fails for any of the inputs/assertions; if it fails for any assertion, the test case stops executing further and is reported as failed.
You can pass parameters through to your login function just as you would with any PHP function:
loginCept.php code
$I = new AcceptanceTester($scenario);
$I->wantTo('perform actions and see result');
$I->login($I,"test@gmail.com","test");
Acceptance.php file
class Acceptance extends \Codeception\Module
{
public function login($I,$username,$password)
{
$I->amOnPage(login::$loginIndex);
$I->wait(2);
$I->fillField(login::$userName,$username);
$I->fillField(login::$password,$password);
$I->click(login::$submitButton);
$I->see(login::$assertionWelcome);
$I->wait(2);
$I->click(login::$logoutLink);
}
}
You'd then want to create a separate cept for each aspect of login that you are looking to test.
Edit:
What you're looking for in relation to one test running through a number of assertions, this breaks the conventions of automated testing. Each test (or cept in this case) should only ever test one aspect. For instance in logging in, you might have one for invalid username, invalid password, too many attempts, etc... Then when/if one test fails, you as the developer knows exactly what aspect has failed and which continue to pass. If all the aspects are wrapped up in one test, then you as the developer don't know the full picture until you start to debug.