Retrieve output log when test fails during setup - selenium

I'm running automated unit tests with SpecFlow and Selenium. SpecFlow offers BeforeTestRun, BeforeFeature, and BeforeScenario attributes to execute setup code at the appropriate point before the test run, a feature, or a scenario.
I'm also using log4net to log test output.
When a test fails during the test or during the BeforeScenario phase, I can see the output logged.
But when a test fails during BeforeTestRun or BeforeFeature, there is no output available.
This makes it difficult to diagnose failures that happen in these early stages on the remote testing server, where the output logs are all I have.
Is there any way to use log4net to get output logs when the test fails before the individual test has begun?
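To illustrate, the hooks look roughly like this (a simplified sketch; the class and logger names are placeholders, not my real code):
using log4net;
using log4net.Config;
using TechTalk.SpecFlow;

[Binding]
public static class TestRunHooks
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(TestRunHooks));

    [BeforeTestRun]
    public static void BeforeTestRun()
    {
        // Configure log4net first, so later hooks have an appender to write to.
        XmlConfigurator.Configure();
        Log.Info("Test run starting");
    }

    [BeforeFeature]
    public static void BeforeFeature()
    {
        // Feature-level setup (e.g. starting the Selenium driver) goes here;
        // a failure in this hook is what currently produces no output.
        Log.Info("Feature starting");
    }
}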

You can implement a custom method that uses the TestResult and TestContext objects, and call it from a [TearDown] method or somewhere at the end of the test:
[TearDown]
public void LogResultOnTearDown()
{
    // TestStatus lives in NUnit.Framework.Interfaces
    var result = TestContext.CurrentContext.Result;

    if (result.Outcome.Status == TestStatus.Failed)
    {
        // Capture the failure details so they can be logged
        string message = result.Message;
        string stackTrace = result.StackTrace;
    }
    else if (result.Outcome.Status == TestStatus.Passed)
    {
        // User-defined action
    }
    else
    {
        string otherStatus = result.Outcome.Status.ToString();
    }
}
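If you also want those details to end up in the log4net output, you could forward them from the failed branch, for example (assuming an ILog field named log is already configured somewhere in the fixture):
log.Error("Test failed: " + message + System.Environment.NewLine + stackTrace);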

Actually it's strange that it doesn't show you the log when a test fails; it shows up fine for me.
What I suggest is to check that your log4net configuration pushes all the logs to the console. If you haven't done anything special, your logger should have a console appender by default.
I initialise my logger like this:
private static readonly ILog log =
    LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
Another guess is that maybe when your test fails in the OneTimeSetUp fixture there is no log output yet.
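In case there is no appender configured at all in your project, a minimal programmatic setup (just a sketch of plain log4net usage, not taken from your code) that routes everything to the console would look like this:
using log4net.Appender;
using log4net.Config;
using log4net.Layout;

public static class LoggingSetup
{
    // Call this once before any test code runs, e.g. from a [BeforeTestRun] hook.
    public static void ConfigureConsoleLogging()
    {
        var layout = new PatternLayout("%date [%thread] %-5level %logger - %message%newline");
        layout.ActivateOptions();

        var appender = new ConsoleAppender { Layout = layout };
        appender.ActivateOptions();

        // Attaches the console appender to the root logger.
        BasicConfigurator.Configure(appender);
    }
}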

Related

Extent report not generated when script abruptly ends

I have a POM-based Selenium framework and I am using Extent Reports; the reports are generated fine if all of the scripts run. But if one of the scripts fails abruptly because the browser disappears, the run fails and the report is not generated.
Example: I have 3 scripts to run as part of my driver script. While the 3rd script is running, if something goes wrong (like the browser disappearing), the report is not generated. I want the extent report to be generated whenever the run fails or stops. My driver script calls extent.flush() as its last step. How can we generate the report on failure? If the failure is due to an object not being found, then I am able to get the report.
How do I generate the report whenever execution stops?
Any help is greatly appreciated.
Thanks,
Raju
I'm assuming that you are using TestNG. If that is true, add the parameter alwaysRun = true to your TestNG configuration methods, e.g.
@BeforeMethod(alwaysRun = true)
public void beforeMethod() {
    // your_code
}

@AfterMethod(alwaysRun = true)
public void afterMethod() {
    // your_code
}

@Test(enabled=false) is not shown as skipped test case

We have a number of TestNG tests that are disabled due to functionality not yet being present (enabled = false), but when the test classes are executed the disabled tests do not show up as skipped in the TestNG report. We'd like to know at the time of execution how many tests are disabled (i.e. skipped). At the moment we have to count the occurrences of enabled = false in the test classes which is an overhead.
Is there a different annotation to use or something else we are missing so that our test reports can display the number of disabled tests?
For example, the method below still gets executed:
@Test(enabled = false)
public void ignoretestcase() {
    System.out.println("Ignore Test case");
}
You can use SkipException:
throw new SkipException("Skip the test");
Check this post: Skip TestNg test in runtime

How to write same codeception acceptance test case with many different set of inputs

In Codeception acceptance testing, how do I run/write the same test case with many different sets of inputs?
Here is my sample acceptance test (I am using the page object concept).
loginCept.php code
$I = new AcceptanceTester($scenario);
$I->wantTo('perform actions and see result');
$I->login($I);
Acceptance.php file
class Acceptance extends \Codeception\Module
{
    public function login($I)
    {
        $I->amOnPage(login::$loginIndex);
        $I->wait(2);
        $I->fillField(login::$userName, "test@gmail.com");
        $I->fillField(login::$password, "test");
        $I->click(login::$submitButton);
        $I->see(login::$assertionWelcome);
        $I->wait(2);
        $I->click(login::$logoutLink);
    }
}
How do I run the same login with multiple sets of inputs in an acceptance test?
I have tried passing the inputs as an array and calling the helper in a for loop with the array values as parameters; inside Acceptance.php the different inputs can then be handled with conditional branches.
But this runs everything as a single test case with different assertions, and it only runs until one of the inputs/assertions fails: as soon as any assertion fails, the test case stops executing and is reported as failed.
You can pass parameters through to your login function just as you would with any PHP function:
loginCept.php code
$I = new AcceptanceTester($scenario);
$I->wantTo('perform actions and see result');
$I->login($I,"test#gmail.com","test");
Acceptance.php file
class Acceptance extends \Codeception\Module
{
    public function login($I, $username, $password)
    {
        $I->amOnPage(login::$loginIndex);
        $I->wait(2);
        $I->fillField(login::$userName, $username);
        $I->fillField(login::$password, $password);
        $I->click(login::$submitButton);
        $I->see(login::$assertionWelcome);
        $I->wait(2);
        $I->click(login::$logoutLink);
    }
}
You'd then want to create a separate cept for each aspect of login that you are looking to test.
Edit:
What you're looking for, one test running through a number of assertions, breaks the conventions of automated testing. Each test (or cept in this case) should only ever test one aspect. For instance, for logging in you might have one test for an invalid username, one for an invalid password, one for too many attempts, etc. Then when/if one test fails, you as the developer know exactly which aspect has failed and which continue to pass. If all the aspects are wrapped up in one test, then you don't get the full picture until you start to debug.

Teamcity rerun specific failed unstable tests

I have TeamCity 7.1 and around 1000 tests. Many tests are unstable and fail randomly. If even a single test fails, the whole build fails, and running a new build takes an hour.
So I would like to be able to configure TeamCity to rerun failed tests within the same build a specific number of times; any success for a test should then be considered a success, not a failure. Is that possible?
Also, at the moment, if tests in one module fail, TeamCity does not proceed to the next module. How can I fix that?
With respect, I think you might be approaching this problem from the wrong end. A test that fails randomly is not providing you any value as a metric of deterministic behaviour. Either fix the randomness (through the use of mocks, etc.) or ignore the tests.
If you absolutely have to, I'd put a loop around some of your test code and catch, say, 5 failures before rethrowing the exception as a 'genuine' failure. Something like this C# example would do:
public static void TestSomething()
{
    var counter = 0;
    while (true)
    {
        try
        {
            // add test code here...
            return;
        }
        catch (Exception) // catch more specific exception(s)...
        {
            if (counter == 4)
            {
                throw;
            }
            counter++;
        }
    }
}
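To avoid copying that loop into every test, you could also pull it out into a small reusable helper along these lines (just a sketch; RetryHelper and the attempt count are illustrative):
using System;

public static class RetryHelper
{
    // Runs the test body up to maxAttempts times; only the final
    // failure is allowed to propagate as a 'genuine' failure.
    public static void Run(Action testBody, int maxAttempts = 5)
    {
        for (var attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                testBody();
                return; // passed
            }
            catch (Exception)
            {
                if (attempt == maxAttempts)
                {
                    throw;
                }
                // swallow the failure and retry
            }
        }
    }
}
A test would then simply wrap its body: RetryHelper.Run(() => { /* test code */ });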
While I appreciate the problems that can arise with testing async code, I'm with @JohnHoerr on this one: you really need to fix the tests.
The rerun-failed-tests feature is part of the Maven Surefire Plugin: if you execute mvn -Dsurefire.rerunFailingTestsCount=2 test,
then failing tests will be rerun until they pass or the number of reruns has been exhausted.
Of course, -Dsurefire.rerunFailingTestsCount can be used from TeamCity or any other CI server.
See:
http://maven.apache.org/surefire/maven-surefire-plugin/examples/rerun-failing-tests.html

How to test Jobs in playframework?

I have:
@OnApplicationStart
public class SomeClass extends Job {
    @Override
    public void doJob() {
        // ...
    }
}
How can I test in my unit test that doJob() actually ran when the application started?
I would argue that this is not a unit test, but an integration test.
You can test your Job by simply calling it with the syntax new MyJob().now();, but as you are looking to test the @OnApplicationStart behaviour, you would be better off doing this as a Selenium test and checking that the data you expect the bootstrap job to make available is present.