Behat stops running mid-suite, on a different test each time?

I am encountering a bizarre problem with Behat where it "randomly" just stops running even though it hasn't finished all the scenarios.
The strange thing is that it stops in a different place each time, which suggests the cause isn't our Gherkin or PHP but something else. This happens across repeated runs with no changes to any Behat-related files or to the website we run it against.
Also, we get no output saying why it stopped running; it just stops.
Has anyone else experienced this, and if so, do you have any advice?
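One diagnostic worth trying here (a sketch, not from the original question; it assumes Behat is bootstrapped through a PHP file you control, such as features/bootstrap/bootstrap.php): register a shutdown handler so that a silent PHP fatal prints something before the process dies.

<?php
// Diagnostic sketch: surface silent PHP fatals.
// register_shutdown_function() still runs after a fatal error, and
// error_get_last() reports the error that ended the process.
register_shutdown_function(function () {
    $error = error_get_last();
    $fatal = [E_ERROR, E_PARSE, E_CORE_ERROR, E_COMPILE_ERROR];
    if ($error !== null && in_array($error['type'], $fatal, true)) {
        fwrite(STDERR, sprintf(
            "Fatal: %s in %s on line %d\n",
            $error['message'],
            $error['file'],
            $error['line']
        ));
    }
});

If the run dies with no output at all, this at least narrows down whether PHP itself is terminating.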

Related

Force TestCafé to abort the execution of a test if it gets stuck

I am working with TestCafé 1.8.1 and testcafe-browser-provider-electron 0.0.14. Let's say an issue in the application forces it to get stuck and hang one test's execution, so the rest of the tests in my suite cannot continue. Is there a way to force TestCafé to abort that test after a timeout and continue running the rest of the tests in the suite?
I have faced this issue several times, and it's a problem because I cannot see the results of the remaining "good" tests, just because of one test that hung the whole execution.
TestCafe reloads the browser and restarts the latest test if the application gets stuck. Currently, TestCafe cannot drop that test in such a case.

Karate Tests Stuck on Running Forever

We currently have about 200 test features. We have started to see something strange: most of the time, the tests simply get stuck and will not proceed when we run the mvn test command as follows:
mvn clean test -Dcucumber.options="--tags $tags" -Dtest=TestRunner -Dkarate.env=$env
Some tests run as if everything were perfectly fine, but at some point the rest just get stuck, as if the run hangs.
We run the tests in parallel using 10 threads.
It gets stuck like this: [screenshot]
Has anybody experienced something similar? Any ideas about what could have gone wrong?
Thanks
This should be fixed in 0.9.5.RC3 - it is stable to use for API testing, so I recommend you upgrade.
If anyone faces this problem for any other version of Karate, please understand that the best (and possibly only) way to troubleshoot or solve this - is to follow this process: https://github.com/karatelabs/karate/wiki/How-to-Submit-an-Issue
I actually have the same problem, but I can't comment because of reputation. My project uses Gradle, and I'm on IntelliJ IDEA with JDK 1.8 (before all this I tried JDK 11, had the same problem, downgraded to Java 8, and everything worked again). This time I did as Peter said and upgraded to 0.9.5.RC4, but some of my features still never finish. For example, I'm currently working on a very simple feature that calls another feature for login; the called feature works for many other features, but here it appears to reach the end of its execution and never return to the caller. Running out of options, I created a new, simple project, copied in the resources folder where I keep my features along with my parallel runner class, and tried again, but it behaves the same way: the execution never ends.
I'll attach an image of my screen while it executes; as you can see, it has been executing for 15 minutes. [screenshot: projectView]

Codeception won't run any tests: fails with no errors, quits before running anything

I go to run an acceptance test, but it only outputs
Codeception PHP Testing Framework v2.0.12
Powered by PHPUnit 4.5.1 by Sebastian Bergmann and contributors.
Then it quits without any error message. No tests run, no errors.
Browser tests were working fine, and only acceptance tests were broken.
I have already solved the problem, but I want to create a record for the next time I or anyone else runs into it.
If your Codeception run quits without any errors or failure messages, it means there is an error in your code somewhere. I found the error in my Acceptance helper file, where I had a duplicate of a function. Two functions cannot have the same name in PHP, so everything fails, but Codeception does not output any error message.
To solve this problem, look through your helper functions for a syntax error; it could also be in your actual tests.
The reason it fails is that Codeception hits an error in the PHP code and dies without throwing any errors, leaving you confused and frustrated. Now you can find this question and get back to what you were doing.
YAY!
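To make the failure mode above concrete, here is a minimal sketch (the function name is hypothetical; the same applies to duplicate method names inside a helper class):

<?php
// Hypothetical helper excerpt: declaring the same function twice is a
// fatal "Cannot redeclare" error in PHP, and because the fatal happens
// at load time, the process dies before any test runs.
function seeLoggedIn()
{
    // ...
}

function seeLoggedIn() // duplicate declaration -> fatal error
{
    // ...
}

With error display off, PHP dies on the redeclaration without printing anything to the console, which matches the behaviour described above.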
There is a syntax error somewhere in your testing code; find it and get rid of it, and it will work. I'd bet the error is in AcceptanceHelper.php, or in Browser, or whichever suite won't run tests.
If you're seeing this behaviour (Codeception quits without any error messages), there is likely a fatal PHP error happening somewhere, but it's not necessarily in your own code or in your Codeception-generated files.
Depending on your PHP configuration, these errors should show up in an error log, even if they're not output to the console.
For example, if you're using MAMP on Mac, the error log is here by default: /Applications/MAMP/logs/php_error.log
Clear the PHP error log, run your (non-working) test, and check the log again. It should give more insight.
In my case, it was a matter of running Codeception 4 on Laravel 5.5, and hence missing classes from the symfony/service-contracts package. Installing this package with Composer got past the "invisible" problem (though I ultimately had to downgrade to Codeception 3, since there doesn't seem to be a compatible set of Symfony / Laravel 5.5 / Codeception 4 packages).
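For the log-checking advice above, a minimal sketch of forcing PHP to record errors somewhere predictable (the file is a Codeception bootstrap, e.g. tests/_bootstrap.php, and the log path is an assumption; adjust for your setup):

<?php
// Hypothetical tests/_bootstrap.php addition: make sure fatal errors are
// both displayed and written to a log file whose location you know.
error_reporting(E_ALL);
ini_set('display_errors', '1');  // echo errors to the console
ini_set('log_errors', '1');      // also write them to a file
ini_set('error_log', __DIR__ . '/_output/php_error.log');

Then run the failing suite and inspect that file; a fatal such as a redeclared function or a missing class should show up there even when the console stays silent.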

Re-running failed and not-run tests in IntelliJ IDEA

Let me describe a simple use-case:
Running all tests in our project may take up to 10 minutes.
Sometimes I see an obvious bug in my code after the first failed test, so I want to stop running all tests, fix the bug and re-run them. Unfortunately, I can either re-run all tests from the beginning, or re-run failed tests only.
Is there a plugin for IDEA which allows me to re-run failed tests AND tests, which weren't yet executed when I pressed "STOP"?
Atlassian has the solution for your problem: Clover. But it is commercial.
This goes against the idea of a test suite. Normally you want to run all your tests precisely so you know you haven't broken anything somewhere unexpected. If you change the code and then run only a subset of the tests, you may have broken something that one of the skipped tests would have caught. This is a case of wanting to have your cake and eat it too.
If you find a bug in an early test, by all means stop the suite. Fix the bug but then run the suite from the beginning.