Is there a way to abort a test suite in TestCafe if a test in the middle fails?

I have a series of tests that are dependent on a step in the middle (such as creating an account). The API that I'm using for this is kind of brittle (which is a separate problem) and sometimes fails. I'd like to be able to just quit the tests in the middle there when that fails, instead of waiting for TestCafe to fail the initial assertions for the next few tests that follow. Is there a way to get the test controller to stop, or to signify to the fixture that the tests should stop? I immediately thought of Spock's @Stepwise annotation, but I can't find anything like that in the TestCafe docs.

The Stop on First Fail option stops the entire run as soon as a test fails. If I understand your scenario correctly, you could add an assertion for successful account creation; if it fails, this option exits the entire run.
CLI Documentation
API Documentation (under Parameters)
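For example, a minimal sketch (the page URL, the selector, and the account-creation steps are hypothetical placeholders):

import { Selector } from 'testcafe';

fixture`Account flow`
    .page`https://example.com`;

test('Create account', async t => {
    // ... steps that call the brittle account-creation API ...
    // Fail fast if the account did not actually get created:
    await t.expect(Selector('#account-created').exists).ok();
});

Run it with testcafe chrome tests/ --stop-on-first-fail, or via the programmatic API with runner.run({ stopOnFirstFail: true }); the run then aborts as soon as this assertion fails, instead of continuing into the dependent tests.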

Related

Simulating Exception States in e2e Testing

Here is a situation that I want to test with e2e, but I'm not sure of the best way. During a specific workflow, an action requires the backend to make a REST request. This request should never fail, but in the exceptional case that it does (network connectivity or unexpected downtime), I want to at least handle it gracefully, and I want to use Selenium to check that it is handled gracefully in the UI. However, the design dictates that there should be no way to reach this error state from the UI through normal use.
The question is: should I code into the application some way of triggering this exception via frontend actions, just so Selenium can check that it's handled gracefully? Would that make the test too synthetic to be useful? Or should I just not create an automated test for this requirement and pray that it never occurs?

With TestCafe and Electron, is there a way to execute script after the last test but before the app has shut down?

I am using `TestCafe` to test our Electron app and need a way to know when the last test in a fixture has been executed BUT before `TestCafe` shuts our app down.
The standard hooks *(fixture.after, fixture.afterEach)* won't work on their own. In particular, fixture.after won't work because it is called BETWEEN test runs (the test app will already have been shut down), and I need my app to still be around.
If I can get the number of tests active for this test run from inside the fixture, I can count the runs myself and call my custom code during the last test. If there is another way to do this, that would be appreciated as well.
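To illustrate, the counting approach would look something like this sketch, except that the total has to be hard-coded, which is exactly the information I am missing:

const TOTAL_TESTS = 3; // would have to be kept in sync with the fixture by hand

let finished = 0;

fixture`Electron smoke tests`
    .afterEach(async t => {
        finished++;

        if (finished === TOTAL_TESTS) {
            // The last test has just finished and the app is still alive here,
            // so this branch is where my custom code would go.
        }
    });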
Any insights appreciated,
m
You can create a special 'teardown' fixture, place all necessary code into it, and pass it at the end of the test file list:
testcafe chrome tests/* teardown.js
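A minimal sketch of such a teardown file (the fixture and test names are placeholders):

// teardown.js -- its single test runs last because the file is passed last
fixture`Teardown`;

test('Final steps before shutdown', async t => {
    // The app is still running at this point; code that must execute
    // before TestCafe shuts it down goes here.
});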
Take a look at the testcafe-once-hook module, which allows you to execute test actions once per fixture. Here is an example of how to use it: https://github.com/AlexKamaev/testcafe-once-hook-example.
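If I read that example correctly, usage is along these lines (the export name comes from the linked repo, so treat this sketch as an assumption rather than confirmed API):

const { oncePerFixture } = require('testcafe-once-hook');

fixture`My fixture`
    .beforeEach(oncePerFixture(async t => {
        // executed only once per fixture instead of before every test
    }));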

Retry a Failed Automation Test Case from a Logical Point in E2E Automation

We are trying to automate E2E test cases for a booking application, each involving around 60+ steps. Whenever there is a failure in the final steps, the traditional retry option is very time-consuming, since the test case is executed from step 1 again. The application has some logical steps that can be marked somehow, and we would like to resume the test case from a logical point before the failed step. For example, suppose every 10th step among the 60 is a logical point from which the script can resume instead of retrying from step 1. If the failure is at step 43, then with the help of the booking reference number the test can resume from step 41, since validation has been completed up to step 40 (step 40 is a logical closure point).

You might suggest splitting the test case into smaller modules, but that will not work for me, since it is an E2E test case for the application that we want to keep in a single Geb Spec. The framework is built using Geb & Spock for web application automation. Please share your thoughts on how we can build recovery scenarios for this case. Thanks for your support!
As of now, I am not able to find an existing solution for this kind of problem.
Below are a few things that can be done to achieve this, but before we talk about solutions, we should also talk about the issues they will create. You are running E2E test cases, and if they fail at step 10, they should be started from scratch, not from step 10, because you can miss important integration defects that occur only when you perform the 10th step right after running the first 9. For example, if you create an account and then immediately search for a hotel, your application might throw an error because it is a newly created account; but if you retry from the step where you are just searching for hotel rooms, it might work because of the time spent between the test failure and the restart, and you will not notice the issue.
Now, if you must achieve this:
Create a log every time you reach a checkpoint; this can be a simple text file recording the test case name and checkpoint number. Then use a retry analyzer to rerun the failed tests. Inside the test, look for the text file with the test case name; if it exists, simply skip ahead to the checkpoint mentioned in the file. This can be applied in different ways: for example, if your E2E test goes through 3 applications, the file can hold the test case name and the last application that passed; or, if you have used page objects, you can write the last successful page object name to the text file and use that to continue the test.
The above solution is just an idea; I don't think there are any existing solutions for this issue.
I hope this gives you an idea of how to start working on this problem.
One possible solution is to first define the way in which you want to write your tests.
I would recommend treating one test Spec (class) as one E2E test containing multiple features.
Also, consider the open-source Spock Retry project available on GitHub. After applying @RetryOnFailure, your final code should look like:
import geb.spock.GebReportingSpec
import spock.lang.Shared
// plus the @RetryOnFailure annotation import from the spock-retry project

@RetryOnFailure(times = 2) // 'times' is the number of retry attempts; the default is 0
class MyEndtoEndTest1 extends GebReportingSpec {

    @Shared // @Shared is needed so the value carries over between feature methods
    def bookingRefNumber

    def "First feature block which covers the test up to a logical step"() {
        // your test steps here
        // bookingRefNumber = ... assign your booking ref here
    }

    def "Second feature which covers a set of subsequent logical steps"() {
        // use the bookingRefNumber generated in the first feature block
    }

    def "Third set of logical steps"() {
        // your test steps here
    }

    def "End of the E2E test"() {
        // your final test steps here
    }
}
The passing of all the Feature blocks (methods) will signify a successful E2E test execution.
It sounds like your end-to-end test case is too big and too brittle. What's the reasoning behind needing it all in one script?
You've already stated you can use the booking reference to continue from a later step if it fails; this seems like a logical place to split your tests.
Do the first bit and output the booking reference to a file. Read the booking reference in the second test and complete the journey; if it fails, a retry won't take anywhere near as long.
If you're using your tests to provide quick feedback after a build and they keep failing, I would look to split the journey into smaller smoke tests and, if required, run some overnight end-to-end tests with as many retries as you like.
The fact that it keeps failing suggests your tests, environment, or build are brittle.

Preventing asserts from failing tests in Codeception

I've just started exploring automated testing, specifically Codeception, as part of my QA work at a web design studio. The biggest issue I'm experiencing is having Codeception fail a test as soon as an assert fails, no matter where it's placed in the code. If my internet connection hiccups or is too slow, things can become difficult. I was wondering if there were methods to provide more control over when Codeception will fail and terminate a test session, or even better, a way to retry or execute a different block or loop of commands when an assert does fail. For example, I would like to do something similar to the following:
if ($I->see('Foo')) {
    echo 'Pass';
} else {
    echo 'Fail';
}
Does anyone have any suggestions that could help accomplish this?
You can use a conditional assertion:
$I->canSeeInCurrentUrl('/user/miles');
$I->canSeeCheckboxIsChecked('#agree');
$I->cantSeeInField('user[name]', 'Miles');
The Codeception documentation says:
Sometimes you don't want the test to be stopped when an assertion fails. Maybe you have a long-running test and you want it to run to the end. In this case you can use conditional assertions. Each see method has a corresponding canSee method, and dontSee has a cantSee method.
I'm not sure if I understand it correctly, but I think you should try using a Cest.
$ php codecept.phar generate:cest suitename CestName
That way you write one test per test function. If a test fails, it will abort. You can also configure Codeception so that it does not abort, and only shows the failing test in a summary at the end of all tests.
See here in the documentation: https://github.com/Codeception/Codeception/blob/2.0/docs/07-AdvancedUsage.md
Maybe it's better to use:
$I->dontSee('Foo');
Regards

Login only once for several test cases - Selenium, JUnit

I am using Selenium 2.x with JUnit 4.x for automation testing. There are several test cases in the test class. However, for each test case a new session is created.
That is, for each test case:
a new browser window is opened,
the login mechanism is carried out,
generic steps get executed,
test steps get executed,
the browser gets closed.
Is it possible to do the following?
a new browser window is opened,
the login mechanism is carried out,
generic steps get executed
(the above steps are carried out only once),
all test steps (methods with @Test) get executed,
and finally the browser gets closed?
PS: I do not want to club all the test cases into a single one.
Thanks,
With every new browser session, Selenium creates a new instance of the browser test profile, so re-invoking will cause you to start afresh.
Your requirement, though, appears to be more organizational.
Try working with TestNG. It enables the creation of test suites, which can be executed via a testng.xml. You should be able to script tests in different classes and then call them sequentially, without necessarily having to re-invoke the browser.