Continue Behat execution on exception - testing

I am writing a step definition for Behat in which I need to check whether an array is empty; if it is not empty, I need to print the array and fail the step. To do this I have written the following code in the step definition:
// Print the collected issues and fail the step if any were found.
if (!empty($issues)) {
    print_r($issues);
    throw new \Exception("Above issues were found.");
}
Currently, when the exception is thrown, it stops the execution and the remaining scenarios are not executed.

You cannot fail just one step and still continue with the rest of the scenario.
If any step of a scenario fails then the scenario fails, and the rest of the steps in that scenario will not be executed, since they are meant to continue from the actions of the steps that failed before them.
Please recheck the logic of the scenario and review the Behat/BDD documentation.

Jeevan, Behat does not stop script execution if one or multiple scenarios fail in a feature file. For example, if a feature file "test.feature" has 10 scenarios and you run the entire file with behat features/test.feature, all the scenarios will run even if Scenario 2 fails.
In the end, you will see a summary of the passed and failed scenarios.
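For illustration, a minimal sketch of such a step definition: it fails the current step by throwing an exception that carries the issue details in its message, and Behat still carries on with the remaining scenarios of the run. The step phrase and the getIssues() helper are placeholders, not part of the original question.

/**
 * @Then no issues should be found
 */
public function assertNoIssuesFound()
{
    // Hypothetical helper that returns the collected issues.
    $issues = $this->getIssues();

    if (!empty($issues)) {
        // Put the details into the exception message so they show up in the
        // failure output; the rest of the run still gets executed.
        throw new \Exception("The following issues were found:\n" . print_r($issues, true));
    }
}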

Behat output formatter to show progress as TAP and fails inline.
https://github.com/drevops/behat-format-progress-fail
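The formatter is enabled through behat.yml; the snippet below is only a sketch based on the project's README, so check the repository for the exact extension class name and formatter key (assumed here to be DrevOps\BehatFormatProgressFail\FormatExtension and progress_fail):

default:
  extensions:
    DrevOps\BehatFormatProgressFail\FormatExtension: ~
  formatters:
    progress_fail: true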

Related

How to Skip the passed test cases in Cucumber-QAF setup

I have a project where I am running 100 scenarios every day. After the run has completed, I update the pass/fail status in an Excel sheet through listeners. I want a solution where, if I run the test suite again, the passed test cases are skipped and only the failed test cases run. I don't want to use retry. I tried to use skipException in the beforeInvocation listener method, but the passed test cases still execute. How can I skip the passed test cases and execute only the failed ones through listeners?
Every time, before the start of a scenario, it should go to the listener and check the Excel sheet to see whether the scenario passed or failed. If it passed, the scenario should be skipped.
Any help will be greatly appreciated.
Update: I am able to do it through listeners with skipException, but in my report the test is shown as failed and not as skipped.
When you run BDD tests, QAF generates a configuration file named testng-failed-qas.xml under the reports dir. You should use that config file to run only the failed scenarios.

Log name of Scenario when it starts

Is there a way to get Karate to automatically print the name of each scenario as it is executed into the logs? We have a reasonably large suite that generates ~25MB of log data in our Jenkins pipeline console output and sometimes it’s a little tricky trying to match a line where com.intuit.karate logs an ERROR to the failure summary at the end of the run. It is most likely possible to obtain the scenario name and print() it but that would mean adding code to many hundred scenarios which I’d like to avoid.
As part of the fix for this issue Karate will log the Scenario name (if not empty) along with any failure trace.
A beta version with this fix, 0.6.1.2, is available; it would be great if you could try it and confirm.
If you feel more has to be done, do open a ticket and we'll do our best to get this into the upcoming 0.6.2 release.

Rerun Failed Behat Tests Only A Set Number of Times

I have a Behat testing suite running through Travis CI on Pull Requests. I know that you can add a "--rerun" option to the command line to rerun failed tests, but for me Behat just keeps trying to rerun the failed tests, which eventually times out the test run session.
Is there a way to limit the number of times that failed tests are re-run? Something like: "behat --rerun=3" for trying to run a failed scenario up to three times?
The only other way I can think to accomplish what I want is to use the database I'm testing Behat against or to write to a file and store test failures and the number of times they have been run.
EDIT:
Locally, running the following commands ends up re-running only the one test I purposely made to fail...and it does so in a loop until something happens. Sometimes it was 11 times and sometimes 100+ times.
behat --tags #some_tag
behat --rerun
So, that doesn't match what the Behat command line guide states. In my terminal, the help option gives me "--rerun Re-run scenarios that failed during last execution." without any mention of the failed scenario file. I'm using a 3.0 version of Behat though.
Packages used:
"behat/behat": "~3.0",
"behat/mink": "~1.5",
"behat/mink-extension": "~2.0",
"behat/mink-goutte-driver": "~1.0",
"behat/mink-selenium2-driver": "~1.1",
"behat/mink-browserkit-driver": "~1.1",
"drupal/drupal-extension": "~3.0"
Problem:
Tests fail at random, mainly due to Guzzle timeout errors going past 30 seconds while trying to GET a URL. Sure, you could try bumping up the max execution time, but since other tests have no issues and 30 seconds is already a long time to wait for a request, I don't think that will fix the issue, and it would make my test runs much longer for no good reason.
Is it possible that you might not have an efficient CI setup?
I think this option should run the failed tests only once, so maybe your setup is entering a loop.
If you need to run the failed scenarios a set number of times, you could write a script that checks some condition and runs behat with --rerun (a sketch follows below).
Be aware that, as I see in the Behat CLI guide, if no scenario is found in the file where the failed scenarios are saved, then all scenarios will be executed.
I don't think that using '--rerun' in CI is good practice. You should do a high-level review of the results before deciding to do a rerun.
I checked the rerun on Behat 3 today and it seems there might be a bug related to the rerun option; I saw some pull requests on GitHub about it.
One of them is https://github.com/Behat/Behat/pull/857
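For the "write a script" suggestion above, here is a minimal sketch of a wrapper that caps the number of re-runs; it assumes the behat binary is at vendor/bin/behat and that behat exits with a non-zero code while failures remain:

<?php
// rerun-behat.php (hypothetical wrapper): run the suite once, then re-run
// only the previously failed scenarios, at most $maxAttempts more times.
$maxAttempts = 3;

passthru('vendor/bin/behat', $exitCode);

for ($attempt = 1; $exitCode !== 0 && $attempt <= $maxAttempts; $attempt++) {
    passthru('vendor/bin/behat --rerun', $exitCode);
}

exit($exitCode);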
Related to the timeout: you can check whether Travis has a timeout you can configure and whether it has enough resources, and you can run the same steps from your desktop against the same test environment to see the difference.
Also, set CURLOPT_TIMEOUT for Guzzle to the value needed for the requests to pass, as long as it is not too exaggerated; otherwise you will need to find another solution to improve the execution speed.
A higher value should not be much of an issue because this is a conditional wait: if the request is faster it will not impact the execution time, and otherwise it will simply wait longer for the problematic scenarios.
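As an illustration of the timeout point, a sketch of raising the timeout on a Guzzle client; it assumes a Guzzle version (6+) whose request options include 'timeout', 'connect_timeout' and raw curl options, and note that with the Mink Goutte driver the client is normally configured through the driver rather than built by hand like this:

<?php
use GuzzleHttp\Client;

// Give slow GET requests more time before Guzzle/curl gives up.
$client = new Client([
    'timeout'         => 60,                      // overall request timeout, in seconds
    'connect_timeout' => 10,                      // time allowed to establish the connection
    'curl'            => [CURLOPT_TIMEOUT => 60], // equivalent raw curl option
]);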

Fitnesse: How to stop one test run when executing a suite

I am using Fitnesse 20130530 to execute a test suite that contains multiple tests. Most of my tests use script tables with SLIM to drive Selenium. I use a Stop Test Exception to stop the execution of a test when one of the method calls raises an exception. Unfortunately, this also stops the execution of the whole suite. Is there a way to just stop the current test and then continue execution with the next test in the suite?
Not in FitNesse itself, but you can build it into your fixtures.
When I had a similar problem I was able to solve it using what we called "fail fast" mode. This was a static variable that could be set to true under certain conditions (typically by an element not found exception or similar).
Our main driver was structured such that we passed through one spot that could check for that value before calling the browserDriver. This would then skip the browserDriver calls until the test ended.
The next test would clear the flag and start up again.
You would need to manage the whole process, but it can work.

PHPUnit & Selenium code coverage - coverage metrics stop halfway through test

I'm just getting started with PHPUnit and Selenium, yet one problem has been bothering me: I can't seem to get correct coverage figures.
My app takes a user through a multi-step process that involves multiple pages, each of which is handled in PHP by a display function (to output HTML) and a processing function (to handle the results of POST operations). My baseline test runs through the entire process, and completes correctly having visited each of about seven pages. I've both verified this visually and through assertions in the testcase itself.
The issue is that the coverage report indicates that only the first couple of functions are executed and that the others are never visited (despite my visual and testcase checks). I thought the problem was a PHP notice that occurred during the first function and might stop Xdebug/PHPUnit from collecting stats, but I fixed this and the problem remains.
Is there anything that can stop collection of coverage statistics mid-way through a test? All the functions in question are in the same file and are called from a (different) central PHP script which chooses which function to call based on an incrementing session variable.