Preventing asserts from failing tests in Codeception

I've just started exploring automated testing, specifically Codeception, as part of my QA work at a web design studio. The biggest issue I'm experiencing is having Codeception fail a test as soon as an assert fails, no matter where it's placed in the code. If my internet connection hiccups or is too slow, things can become difficult. I was wondering if there were methods to provide more control over when Codeception will fail and terminate a test session, or even better, a way to retry or execute a different block or loop of commands when an assert does fail. For example, I would like to do something similar to the following:
if ($I->see('Foo')) {
    echo 'Pass';
} else {
    echo 'Fail';
}
Does anyone have any suggestions that could help accomplish this?

You can use a conditional assertion:
$I->canSeeInCurrentUrl('/user/miles');
$I->canSeeCheckboxIsChecked('#agree');
$I->cantSeeInField('user[name]', 'Miles');
The Codeception documentation says:
Sometimes you don't want the test to be stopped when an assertion fails. Maybe you have a long-running test and you want it to run to the end. In this case you can use conditional assertions. Each see method has a corresponding canSee method, and dontSee has a cantSee method.
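Applied to the question's example, that might look something like this (a sketch; the starting page and the 'Next' button are assumed for illustration, not from the question):
$I->amOnPage('/');   // assumed starting page
$I->canSee('Foo');   // records a failure but does not stop the test
$I->click('Next');   // still runs even if 'Foo' was missing
$I->canSee('Bar');   // all failed conditional assertions are reported at the end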

I'm not sure if I understand you correctly, but I think you should try using a Cest:
$ php codecept.phar generate:cest suitename CestName
That way you can write each check as its own test function. If a test fails, it aborts, but you can also configure Codeception so that the run does not abort and the failing tests are shown in a summary at the end of the whole run.
See here in the documentation: https://github.com/Codeception/Codeception/blob/2.0/docs/07-AdvancedUsage.md
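A minimal Cest sketch (the class, method, and page names are made up for illustration): each public method is reported as a separate test, so a failure in one does not stop the others from running:
class FooCest
{
    public function seeFoo(AcceptanceTester $I)
    {
        $I->amOnPage('/');
        $I->see('Foo');   // a failure here only aborts this method
    }

    public function seeBar(AcceptanceTester $I)
    {
        $I->amOnPage('/');
        $I->see('Bar');   // still runs even if seeFoo failed
    }
}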
Maybe it's better to use:
$I->dontSee('Foo');
Regards

Related

Is it possible to call a test feature from a MockServer feature(matched) in KarateDSL?

I'm trying to do something that I don't know is even remotely possible.
I have a mock server, and I'd like it to "start another test" (call a test feature) when it receives a given request. I tried a few things, including the one below, but it turns out that this MockServer scenario does not respond.
Scenario: pathMatches('/ideas')
* def xx = call read('SimpleStart.feature')
* def response = $ideas.*
Is there an elegant way to make this work? A workaround or any suggestion you can give me?
The use case is:
Run some tests that make external services invoke the mock server; if the mock server is requested, it triggers other tests.
Thanks in advance.
Yeah, Karate certainly isn't designed to do that. The recommended pattern is to set up your mocks and tests from a Java "runner" for maximum control, and that's what most teams do.
In short, "orchestrate" things from Java code.
That said, see if this gives you some other creative ideas: https://twitter.com/getkarate/status/1417023536082812935
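As a rough sketch of that pattern (the mock feature name, port, and classpath locations are placeholders, not from the question), the mock is started and the test feature is run from plain Java:
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import com.intuit.karate.core.MockServer;

public class OrchestratorRunner {
    public static void main(String[] args) {
        // start the mock first, then drive the "other tests" from Java
        MockServer server = MockServer
                .feature("classpath:mock.feature") // placeholder mock feature
                .http(8080)
                .build();
        Results results = Runner.path("classpath:SimpleStart.feature")
                .parallel(1);
        server.stop();
        System.exit(results.getFailCount() == 0 ? 0 : 1);
    }
}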

Testcafe: Ability to tell value of disableMultipleWindows inside of test?

I have a test that sometimes gets run with the command line flag --disable-multiple-windows and sometimes it doesn't. I would like the test to skip over a section if multiple windows are disabled. Is there an easy way to tell that within a test?
My best solution so far is something like the following
let disableMultipleWindows = true;
try {
    await t.openWindow('https://www.google.com');
    disableMultipleWindows = false;
    await t.closeWindow();
}
catch (e) {
    // openWindow throws when multiple windows are disabled
}
But I'm wondering if there's a better way.
At present, support for multiple browser windows can only be disabled for the entire test run. There is no access to the --disable-multiple-windows option within a test body. You can learn more about this option at Disable Support for Multiple Windows.
Could you please clarify your scenario for using the --disable-multiple-windows option? I think you can split your tests into different test suites: with and without multiple browser windows.
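For example, with the tests separated into two directories (the directory names here are made up), the two suites could be run as:
testcafe chrome tests/multi-window/
testcafe chrome tests/single-window/ --disable-multiple-windows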

Is there a way to abort a test suite in TestCafe if a test in the middle fails?

I have a series of tests that are dependent on a step in the middle (such as creating an account). The API that I'm using for this is kind of brittle (which is a separate problem) and sometimes fails. I'd like to be able to just quit the tests in the middle when that fails, instead of waiting for TestCafe to fail the initial assertions for the next few tests that follow. Is there a way to get the test controller to stop, or to signify to the fixture that the tests should stop? I immediately thought of Spock's @Stepwise annotation, but I can't find anything like that in the TestCafe docs.
The Stop on First Fail option stops the entire run as soon as a test fails. If I understand your scenario correctly, you could add an assertion for successful account creation; if it fails, this option exits the entire run.
CLI Documentation
API Documentation (under Parameters)
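For example (the browser and test path are placeholders), from the command line:
testcafe chrome tests/ --stop-on-first-fail
or through the programmatic API:
const createTestCafe = require('testcafe');

(async () => {
    const testcafe = await createTestCafe();
    const failedCount = await testcafe
        .createRunner()
        .src('tests/')                       // placeholder test path
        .browsers('chrome')
        .run({ stopOnFirstFail: true });     // abort the run on the first failed test
    await testcafe.close();
    process.exit(failedCount ? 1 : 0);
})();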

assert and verify in Selenium

Can someone please explain what the difference between assert and verify is?
I know that verify means it checks if it is there, if it isn't the test fails and stops there (correct?).
So does assert carry on even if it does fail?
I've read the documentation and still can't get my head round it.
Nope, you've got it backwards. In Selenium IDE, both verifyWhatever and assertWhatever commands determine whether the specified condition is true, and then different things happen. The assertWhatever command fails the test immediately if the condition is false. The verifyWhatever command allows the test to continue, but will cause it to fail when it ends. Thus, if your test requires you to check for the presence of several items, none of which are present, assertElementPresent will fail on the first, while verifyElementPresent will fail reporting that all are missing.
The down side to verifyWhatever is that you really can't trust the behavior of any test after one of its verifications fails. Since the application isn't responding correctly, you have no way of knowing whether subsequent assertion or verification failures are valid or are the result of the earlier failures. Thus some of us think verifyWhatever commands are Evil.
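To illustrate (the locators are made up): given the commands below, if none of the elements exist, the assert on the first line stops the test immediately, whereas the two verify lines would each record a failure and keep going, so the end-of-test report lists every missing element.
assertElementPresent | id=header
verifyElementPresent | id=footer
verifyElementPresent | id=login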

Create screenshot after failed selenium command

The PHPUnit Selenium base class has an option to take a screenshot on failure, which is a huge help in finding out why a test failed. The Selenium server, however, returns an error instead of a failure on any error condition other than explicit assert* calls (such as trying to do something with a non-existent element). If I try to take a screenshot after the server reports the error, I get another error saying that the server has already discarded the session. Is there any way to change that behavior?
Update: this is because PHPUnit breaks the connection when it receives an error. I was able to change it by some (rather ugly) manipulation of the PHPUnit code.
Make those interactions test cases.
For example, in Perl, if a step is written as below and fails due to a non-existent element, the script will error out:
$sel->type("email-id","trial\@trial.com");
Whereas if the above step is made into a test case by writing it as follows:
$sel->type_ok("email-id","trial\@trial.com");
then on a non-existent element the test case will merely fail, and the script will continue.
This uses TAP (the Test Anything Protocol) via the Test::More module: when _ok is appended to a function name, the function's return value determines the fate of the test case, i.e. a return of '0' means the test failed and a return of '1' means it passed.
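A minimal self-contained sketch of that pattern, using the CPAN Test::WWW::Selenium module (the host, browser, URL, and page path are placeholders):
use strict;
use warnings;
use Test::More "no_plan";
use Test::WWW::Selenium;

my $sel = Test::WWW::Selenium->new(
    host        => "localhost",
    port        => 4444,
    browser     => "*firefox",
    browser_url => "http://example.com/",
);

$sel->open_ok("/signup");                        # reported as a TAP test case
$sel->type_ok("email-id", "trial\@trial.com");   # a failure here does not kill the script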
It is not the Selenium server but the SeleniumTestCase class for PHPUnit 3.4 which automatically sends a stop command when it detects an error (Driver.php line 921). PHPUnit 3.6 seems to handle errors better.
I think you can override the onNotSuccessfulTest() method and do something like this:
public function onNotSuccessfulTest(Exception $e)
{
    file_put_contents('/xxx/xxx.jpg', $this->currentScreenshot());
    parent::onNotSuccessfulTest($e); // re-throws so PHPUnit still records the failure
}