Codeception: mark acceptance test as incomplete

What is the way to mark an acceptance test (FooCest::incompleteTest()) as incomplete in Codeception? Since the classes in the acceptance test examples on the site don't extend another class, it is presumably some static call or something on the $I object.

I know of 2 ways of doing this.
You can throw a PHPUnit exception at the beginning of the test:
throw new \PHPUnit_Framework_IncompleteTestError('your message here');
Or you can use annotations:
/**
 * @incomplete
 */
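For reference, a minimal Cest sketch combining both approaches; the method names are hypothetical and AcceptanceTester is the standard generated actor class:

class FooCest
{
    // Option 1: throw PHPUnit's incomplete-test error at the top of the test.
    public function incompleteTest(AcceptanceTester $I)
    {
        throw new \PHPUnit_Framework_IncompleteTestError('your message here');
    }

    // Option 2: mark the test with the @incomplete annotation.
    /**
     * @incomplete
     */
    public function otherIncompleteTest(AcceptanceTester $I)
    {
        // Test body is not run; the test is reported as incomplete.
    }
}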

Related

Create and run Codeception tests from PHP

I know that Codeception is designed for command-line usage. But as it is completely based on PHP, I am pretty sure there must be a way to create a test dynamically/temporarily from PHP.
In my case I am getting acceptance test steps from a database and need to run the tests dynamically with Codeception. I would prefer a way to do this without always having to generate and delete temporary test folders and run the Codeception commands on the command line.
The problem is that Codeception dynamically generates a bunch of config files and scripts when creating a Cest. I couldn't make it work using the Codeception classes.
Does anyone have an idea what's the best way to achieve this?
I think the best approach would be to implement a custom test loader as documented at https://codeception.com/docs/07-AdvancedUsage#Formats
You still have to use a placeholder file in each suite to kick off the loader, but the tests can be loaded from the database.
Copy of documentation:
In addition to the standard test formats (Cept, Cest, Unit, Gherkin) you can implement your own format classes to customise your test execution. Specify these in your suite configuration:
formats:
  - \My\Namespace\MyFormat
Then define a class which implements the LoaderInterface
namespace My\Namespace;

class MyFormat implements \Codeception\Test\Loader\LoaderInterface
{
    protected $tests;
    protected $settings;

    public function __construct($settings = [])
    {
        // These are the suite settings
        $this->settings = $settings;
    }

    public function loadTests($filename)
    {
        // Load file and create tests
    }

    public function getTests()
    {
        return $this->tests;
    }

    public function getPattern()
    {
        return '~Myformat\.php$~';
    }
}
Look at existing Loader classes for inspiration: https://github.com/Codeception/Codeception/tree/4.0/src/Codeception/Test/Loader
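For illustration, here is a rough sketch of how such a loader might pull test definitions from a database instead of a file; the PDO credentials, the table layout, and the TestFromDatabase class are all hypothetical:

namespace My\Namespace;

use Codeception\Test\Loader\LoaderInterface;

class DatabaseFormat implements LoaderInterface
{
    protected $tests = [];
    protected $settings;

    public function __construct($settings = [])
    {
        $this->settings = $settings;
    }

    public function loadTests($filename)
    {
        // $filename is the placeholder file that kicked off this loader;
        // instead of parsing it, fetch the scenarios from the database.
        $pdo = new \PDO('mysql:host=localhost;dbname=tests', 'user', 'pass');
        $rows = $pdo->query('SELECT name, steps FROM acceptance_tests')->fetchAll();
        foreach ($rows as $row) {
            // TestFromDatabase would turn the stored steps into a
            // runnable Codeception test object.
            $this->tests[] = new TestFromDatabase($row['name'], $row['steps']);
        }
    }

    public function getTests()
    {
        return $this->tests;
    }

    public function getPattern()
    {
        // Match the placeholder file(s) in the suite directory.
        return '~FromDatabase\.php$~';
    }
}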

@Test(enabled=false) is not shown as skipped test case

We have a number of TestNG tests that are disabled (enabled = false) because the functionality is not yet present, but when the test classes are executed, the disabled tests do not show up as skipped in the TestNG report. We'd like to know at execution time how many tests are disabled (i.e. skipped). At the moment we have to count the occurrences of enabled = false in the test classes, which is an overhead.
Is there a different annotation to use, or something else we are missing, so that our test reports can display the number of disabled tests?
For example, the method below still gets executed:
@Test(enabled = false)
public void ignoretestcase()
{
    System.out.println("Ignore Test case");
}
You can use SkipException:
throw new SkipException("Skip the test");
Check this post Skip TestNg test in runtime
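For reference, a minimal sketch of how this could look in practice (the class name and the readiness flag are made up); tests skipped this way do appear in TestNG's skipped count:

import org.testng.SkipException;
import org.testng.annotations.Test;

public class FeatureTest
{
    @Test
    public void notYetImplemented()
    {
        // Hypothetical guard: skip until the functionality exists.
        boolean featureReady = false;
        if (!featureReady) {
            throw new SkipException("Functionality not yet implemented");
        }
        // Real assertions would go here once the feature is in place.
    }
}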

Webdriver - Check for exceptions in log files

I have one requirement, as follows:
- When my @Test method executes, check the log files.
- If there is any exception in the log files, fail the test case; else pass the test case.
Currently, I have the following implementation:
- Clearing the log files (3-4 log files) in the @BeforeTest code
- Checking for exceptions in all log files in the @AfterTest code
But the issue is that when any @Test method passes/fails, control marks that test case's execution status as PASS/FAIL, and after this, although there is an exception in my log file, my TC still passes.
So can you please suggest any possible workarounds for this?
Checking for exceptions in the @AfterMethod will not help, because it only checks the result of the @Test method.
For example:
@Test
public void testCase()
{
}
@AfterMethod
public void tearDown(ITestResult result)
{
}
In the sample above, result holds the @Test method's result. If the test case passes, it will be seen as passed in the @AfterMethod as well.
Workaround:
- Check the logs in your @Test method itself (a sketch follows below); based on that, your @AfterMethod will work fine, given that @AfterMethod executes after every test method.
- Or create an @AfterClass method which checks, at the end of the class, whether all test cases passed.
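A rough sketch of the first option, assuming the log scan happens inside the test itself; the log path and the helper method are hypothetical:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.testng.Assert;
import org.testng.annotations.Test;

public class LogCheckingTest
{
    @Test
    public void testCase() throws IOException
    {
        // ... perform the WebDriver steps under test ...

        // Fail this test if any log file recorded an exception.
        assertNoExceptionsIn("/var/log/app/app.log");
    }

    // Hypothetical helper: scans one log file for the word "Exception".
    private void assertNoExceptionsIn(String path) throws IOException
    {
        for (String line : Files.readAllLines(Paths.get(path))) {
            if (line.contains("Exception")) {
                Assert.fail("Exception found in log: " + line);
            }
        }
    }
}

This way the log check contributes to the test's own pass/fail status instead of running after the result has already been recorded.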

How to write the same Codeception acceptance test case with many different sets of inputs

In Codeception acceptance testing, how do you run/write the same test case for many different sets of inputs?
Here is my sample acceptance test (I am using the page object concept).
loginCept.php code
$I = new AcceptanceTester($scenario);
$I->wantTo('perform actions and see result');
$I->login($I,$m);
Acceptance.php file
class Acceptance extends \Codeception\Module
{
    public function login($I)
    {
        $I->amOnPage(login::$loginIndex);
        $I->wait(2);
        $I->fillField(login::$userName, "test@gmail.com");
        $I->fillField(login::$password, "test");
        $I->click(login::$submitButton);
        $I->see(login::$assertionWelcome);
        $I->wait(2);
        $I->click(login::$logoutLink);
    }
}
How do I run the same login with multiple sets of inputs in an acceptance test?
I have tried passing the inputs as an array, calling the test in a for loop with the array values as input parameters; in Acceptance.php, the multiple sets of inputs can then be handled with a loop.
This runs the test as only one test case with different assertions.
But it only runs until it fails for one of the inputs/assertions: if any assertion fails, the test case stops executing further and is reported as failed.
You can pass parameters through to your login function just as you would with any PHP function:
loginCept.php code
$I = new AcceptanceTester($scenario);
$I->wantTo('perform actions and see result');
$I->login($I, "test@gmail.com", "test");
Acceptance.php file
class Acceptance extends \Codeception\Module
{
    public function login($I, $username, $password)
    {
        $I->amOnPage(login::$loginIndex);
        $I->wait(2);
        $I->fillField(login::$userName, $username);
        $I->fillField(login::$password, $password);
        $I->click(login::$submitButton);
        $I->see(login::$assertionWelcome);
        $I->wait(2);
        $I->click(login::$logoutLink);
    }
}
You'd then want to create a separate cept for each aspect of login that you are looking to test.
Edit:
What you're looking for, one test running through a number of assertions, breaks the conventions of automated testing. Each test (or Cept in this case) should only ever test one aspect. For instance, for logging in you might have one test for an invalid username, one for an invalid password, one for too many attempts, etc. Then, when one test fails, you as the developer know exactly which aspect has failed and which still pass. If all the aspects are wrapped up in one test, you don't get the full picture until you start to debug.
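As a side note, more recent Codeception versions (2.2 and later, if I remember correctly) also support data-driven Cest tests via @example annotations, where each data set runs as its own test case, so one failing input does not stop the others. A minimal sketch, reusing the page object constants from above with made-up credentials:

class LoginCest
{
    /**
     * @example ["user1@gmail.com", "pass1"]
     * @example ["user2@gmail.com", "pass2"]
     */
    public function login(AcceptanceTester $I, \Codeception\Example $example)
    {
        $I->amOnPage(login::$loginIndex);
        $I->fillField(login::$userName, $example[0]);
        $I->fillField(login::$password, $example[1]);
        $I->click(login::$submitButton);
        $I->see(login::$assertionWelcome);
    }
}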

How to test Jobs in the Play framework?

I have:
@OnApplicationStart
public class SomeClass {
    .. doJob() ...
}
How can I test in my unit test that doJob() was actually launched when the application started?
I would argue that this is not a unit test, but an integration test.
You can test your Job by simply calling it with the syntax new MyJob().now();, but as you are looking to test the @OnApplicationStart behaviour, you would be better off doing this as a Selenium test and checking that the data you expect the bootstrap job to make available is present.
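A minimal sketch of the direct-call approach in a Play 1.x unit test; it assumes SomeClass extends Job (which @OnApplicationStart requires), and the commented-out assertion is a hypothetical example of checking the bootstrapped data:

import org.junit.Test;
import play.test.UnitTest;

public class BootstrapJobTest extends UnitTest
{
    @Test
    public void doJobPopulatesData()
    {
        // Run the bootstrap job directly instead of waiting for app start.
        new SomeClass().now();

        // Check whatever state doJob() is expected to set up, e.g.:
        // assertNotNull(User.find("byEmail", "admin@example.com").first());
    }
}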