Is there a way to continue the execution of PHPUnit tests when an assertion fails, when running from the command line?
Is there, for example, a flag for the PHPUnit command like:
phpunit --continueonfailure
or something similar to use during the execution of a series of tests?
I think PHPUnit continues to run tests upon failure by default, so you have probably defined stopOnFailure="true" in your phpunit.xml. Either remove that attribute entirely or set it to false.
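For reference, a minimal phpunit.xml along those lines might look like this (a sketch; the tests directory and suite name are assumptions about your project layout):
<?xml version="1.0" encoding="UTF-8"?>
<phpunit stopOnFailure="false">
    <testsuites>
        <testsuite name="default">
            <directory>tests</directory>
        </testsuite>
    </testsuites>
</phpunit>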
There are a couple of flags that you can append to your command to make PHPUnit stop on an error or failure:
--stop-on-defect Stop execution upon first not-passed test
--stop-on-error Stop execution upon first error
--stop-on-failure Stop execution upon first error or failure
--stop-on-warning Stop execution upon first warning
--stop-on-risky Stop execution upon first risky test
--stop-on-skipped Stop execution upon first skipped test
--stop-on-incomplete Stop execution upon first incomplete test
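For example, to halt a long-running suite as soon as the first failing test is encountered:
phpunit --stop-on-failure
Conversely, running phpunit with none of these flags gives the continue-on-failure behaviour the question asks about.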
You can read more about these flags and about phpunit.xml in the PHPUnit documentation.
Related
I have a project where I am running 100 scenarios every day. After the run has completed, I update the pass/fail results in an Excel sheet through listeners. I want a solution where, if I run the test suite again, the passed test cases are skipped and only the failed test cases run. I don't want to use retry. I tried to throw a SkipException in the beforeInvocation listener method, but the passed test cases still execute. How can I skip the passed test cases and execute only the failed ones through listeners?
Before each scenario starts, the listener should check the Excel sheet to see whether the scenario passed or failed in the previous run; if it passed, the scenario should be skipped.
Any help will be greatly appreciated.
Update: I am able to do it through listeners with SkipException, but in my report the test shows as failed rather than skipped.
When you run BDD tests, QAF generates a configuration file named testng-failed-qas.xml under the reports directory. You should use that config file to run only the failed scenarios.
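For example, assuming the generated file ends up at test-results/testng-failed-qas.xml, you can pass it straight to the TestNG runner (the classpath below is illustrative; adjust it to your project):
java -cp "classes:lib/*" org.testng.TestNG test-results/testng-failed-qas.xml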
I run tests on Bamboo with Selenium, and the Tests tab does not show the test failure. How can I view the test failure?
To see test results (including which test caused the failure) you need to add a proper parser task to your job. There are many parser tasks available, e.g. JUnit Parser, NUnit Parser, TestNG Parser.
The important thing is to move the parser task under the Final tasks bar, so that the parser executes even if a previous task fails.
You don't have any test failures. In fact, no tests ran in this build, as you can see in the line
0 test in total
Your job may have failed for one of two reasons:
An actual failure of the job. Check the detailed log on the Logs tab to see if this is the case.
It may also have failed because you configured it to fail when no tests are found (and in this case there were indeed no tests).
I am writing a Behat step definition in which I need to check whether an array is empty; if it is not empty, I want to print the array and fail the step. To do this I have written the following code in the step definition.
if (!empty($issues)) {
    // Print the offending issues so they show up in the console output.
    print_r($issues);
    throw new \Exception('The above issues were found');
}
Currently, when the exception is thrown, it stops the execution and does not execute future scenarios.
You cannot fail only a single step of a scenario.
If any step of a scenario fails, the scenario fails and the rest of its steps are not executed, since the remaining steps are meant to continue from the actions of the ones that failed.
Please recheck the logic of the scenario and review the Behat/BDD documentation.
Jeevan, Behat does not stop execution if one or more scenarios fail in a feature file. For example, if a feature file test.feature has 10 scenarios and you run the entire file with behat features/test.feature, all of the scenarios will run even if scenario 2 fails.
At the end, you will see a summary of the passed and failed scenarios.
There is also a Behat output formatter that shows progress as TAP and prints failures inline:
https://github.com/drevops/behat-format-progress-fail
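After installing it with Composer, the formatter is selected by name on the command line. A sketch, assuming the package and format names from the project's README (the extension also needs to be registered in behat.yml):
composer require --dev drevops/behat-format-progress-fail
behat --format=progress_fail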
I have a Behat test suite running through Travis CI on pull requests. I know that you can add the --rerun option to the command line to rerun failed tests, but for me Behat just keeps trying to rerun the failed tests, which eventually times out the test session.
Is there a way to limit the number of times failed tests are re-run? Something like behat --rerun=3, to try a failed scenario up to three times?
The only other way I can think of to accomplish this is to use the database I'm testing against, or to write test failures and their run counts to a file.
EDIT:
Locally, running the following commands ends up re-running only the one test I purposely made to fail, and it does so in a loop until something happens; sometimes it ran 11 times, sometimes 100+ times.
behat --tags #some_tag
behat --rerun
So, that doesn't match what the Behat command line guide states. In my terminal, the help option gives me "--rerun Re-run scenarios that failed during last execution." without any mention of a failed-scenario file. I'm using a 3.0 version of Behat, though.
Packages used:
"behat/behat": "~3.0",
"behat/mink": "~1.5",
"behat/mink-extension": "~2.0",
"behat/mink-goutte-driver": "~1.0",
"behat/mink-selenium2-driver": "~1.1",
"behat/mink-browserkit-driver": "~1.1",
"drupal/drupal-extension": "~3.0"
Problem:
Tests fail at random, mainly due to Guzzle timeout errors going past 30 seconds while trying to GET a URL. Sure, you could try bumping up the max execution time, but since other tests have no issues and 30 seconds is already a long time to wait for a request, I don't think that would fix the issue, and it would make my test runs much longer for no good reason.
Is it possible that you don't have an efficient CI setup?
I think this option should run the failed tests only once; maybe your setup enters a loop.
If you need to run the failed scenarios a number of times, maybe you should write a script that checks some condition and runs with --rerun, as sketched below.
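A minimal sketch of such a wrapper in shell, assuming behat exits non-zero while failures remain and that --rerun re-executes only the previously failed scenarios:
#!/usr/bin/env bash
# Run the full suite once; exit early if everything passes.
behat && exit 0
# Otherwise retry only the previously failed scenarios, up to three times.
for attempt in 1 2 3; do
    echo "Rerun attempt $attempt"
    behat --rerun && exit 0
done
exit 1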
Be aware that, as I read in the Behat CLI guide, if no scenario is found in the file where the failed scenarios are saved, then all scenarios will be executed.
I don't think using --rerun in CI is good practice; you should do a high-level review of the results before deciding to rerun.
I checked the rerun on Behat 3 today, and it seems there might be a bug related to the rerun option; I saw some pull requests about it on GitHub.
One of them is https://github.com/Behat/Behat/pull/857
Regarding the timeout, check whether Travis has a timeout you can set and whether it has enough resources; you can also run the same steps from your desktop against the same test environment and compare.
Also set CURLOPT_TIMEOUT for Guzzle to a value large enough for the requests to pass, as long as it is not too exaggerated; otherwise you will need to find another solution to improve execution speed.
A higher value should not be much of an issue, because this is a conditional wait: if the request is faster it will not affect execution time; otherwise it will just wait longer for the problematic scenarios.
I am using FitNesse 20130530 to execute a test suite that contains multiple tests. Most of my tests use script tables with SLIM to drive Selenium. I use a StopTest exception to stop the execution of a test when one of its method calls raises an exception. Unfortunately, this also stops the execution of the whole suite. Is there a way to stop just the current test and then continue with the next test in the suite?
Not in FitNesse itself, but you can build it into your fixtures.
When I had a similar problem, I was able to solve it using what we called "fail fast" mode. This was a static variable that could be set to true under certain conditions (typically by an element-not-found exception or similar).
Our main driver was structured such that every call passed through one spot that checked that value before calling the browserDriver. This would then skip the browserDriver calls until the test ended.
The next test would clear the flag and start up again.
You would need to manage the whole process yourself, but it can work, as in the sketch below.
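A minimal sketch of that idea in Java (FailFast, BrowserFixture and BrowserDriver are illustrative names, not FitNesse or Selenium APIs):
// Hypothetical shared flag; any fixture can trip it when the browser misbehaves.
final class FailFast {
    private static boolean tripped = false;

    static void trip() { tripped = true; }
    static void reset() { tripped = false; }
    static boolean isTripped() { return tripped; }
}

// Hypothetical SLIM fixture whose browser calls all pass through one checkpoint.
public class BrowserFixture {
    public interface BrowserDriver { void click(String locator); }

    private final BrowserDriver browserDriver;

    public BrowserFixture(BrowserDriver browserDriver) {
        this.browserDriver = browserDriver;
        FailFast.reset(); // a new test clears the flag and starts up again
    }

    public boolean clickElement(String locator) {
        if (FailFast.isTripped()) {
            return false; // skip browser calls until the current test ends
        }
        try {
            browserDriver.click(locator);
            return true;
        } catch (RuntimeException e) { // e.g. an element-not-found exception
            FailFast.trip();
            return false;
        }
    }
}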