Codeception: disregard specific error types and relaunch test

Is there a way to tell the system to restart the test in case a specific rare system error comes up?
Basically, we sometimes get strange errors about elements being "obscured" or "stale", which do not mean the site is broken. I believe it has to do with the site's latency, e.g. CSS not loading quickly enough.
For example, is there a directive to tell the system that if an error like
[Facebook\WebDriver\Exception\ElementClickInterceptedException] Element
<li id="nav_step0" class="nav-steps selected"> is not clickable at point (330,237)
because another element <div id="ajaxloading_mask" class="mask"> obscures it
comes up, it should simply relaunch the test?

No, there is no way to relaunch a failed test on a specific error.
You can rerun all failed tests:
codecept run || codecept run -g failed
This command executes all tests; if any fail, it reruns only the failed ones.
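As a workaround for this particular ElementClickInterceptedException, a common alternative to relaunching is to wait for the overlay to go away before clicking. A minimal sketch, assuming an acceptance suite with the WebDriver module enabled; the Cest class, method, and page path are hypothetical, while the selectors come from the error message above:
<?php
// Hypothetical acceptance test: wait for the ajax loading mask to disappear
// before clicking, so the click is not intercepted by the overlay.
class NavigationCest
{
    public function selectsFirstStep(AcceptanceTester $I)
    {
        $I->amOnPage('/'); // hypothetical page
        // Wait up to 10 seconds for the overlay that obscures the click target.
        $I->waitForElementNotVisible('#ajaxloading_mask', 10);
        $I->click('#nav_step0');
    }
}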

Timed out retrying. Expected to find element but never found it. Failed on CI

I am having an issue in GitHub Actions.
I am using Cypress to test the frontend of my app, and everything is working perfectly in the Cypress app.
But when I push everything to master on GitHub and run the tests via GitHub Actions, every now and then the same flaky test fails with the following error.
Timed out retrying after 5000ms: Expected to find element "xxx" but never found it.
This line is the problematic one:
cy.purposeElement("delete_user_dialog").should('be.visible')
The element we are talking about takes 0.2s to appear with a fade-in animation.
My guess is that the page was slow to respond, while Cypress was fast to act.
How can I avoid or fix a flaky test like that?
I could use a cy.wait, but it is not recommended and I do not want to increase the duration of the test.
Plus, the result is just as flaky.
It may be as simple as increasing the timeout, especially if the test works perfectly when running on the local machine.
cy.purposeElement("delete_user_dialog", {timeout:20_000}).should('be.visible')
To be 100% sure, do a burn test which is described here: Burning Tests with cypress-grep
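If several assertions hit the same limit, the timeout can also be raised globally rather than per command. A minimal sketch, assuming the option is set in cypress.json (newer Cypress versions put the same defaultCommandTimeout option in cypress.config.js instead):
{
  "defaultCommandTimeout": 10000
}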

Getting 'Could not find test files' Error when attempting to run TestCafe tests

I'm trying to run some TestCafe tests from our build server, but getting the following error...
"Could not find test files at the following location: "C:\Testing\TestCafe".
Check patterns for errors:
tests/my-test.ts
or launch TestCafe from a different directory."
I did have them running or able to be found on this machine previously, but others have taken over the test coding and changed the structure a bit when moving it to a Git repository. Now when I grab the tests from Git and try to run, the problem presents itself. I'm not sure if there is something in a config file that needs adjustment but don't know where to start looking.
The intention is to have it part of our CI process, but the problem is also seen when I attempt to run the tests from the command line. The build process does install TestCafe, but there is something strange around this as well.
When the build fails with the "can't find tests" error, if I try to run the following command in the proper location...
tescafe chrome tests/my-test.ts
... I get: 'tescafe' is not recognized as an internal or external command, operable program or batch file.
I just can't understand why I can't get these tests running. TestCafe setup was pretty easy previously.
ADDENDUM: I've added a screenshot of the working directory where I cd to and run the testcafe command as well as the tests subdirectory containing the test I'm trying to run.
Any help is appreciated!!
testcafe chrome tests/my-test.ts is just a template; it isn't a real path to your tests. This error means that the path you passed on the command line is wrong and there are no tests at that location. You need to:
Find out where you start the CLI. Please attach a screenshot to your question.
Define an absolute path to the tests, or a path relative to the directory where the CLI was started. Please share a screenshot of your project tree with the tests directory open.
Also, you missed a "t" in the tescafe chrome tests/my-test.ts command. It should be tesTcafe chrome tests/my-test.ts. That is why you get the "'tescafe' is not recognized as an internal or external command" error.
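If the testcafe binary itself is not found even with the correct spelling, TestCafe is probably not installed in that project (or not on PATH). A minimal sketch of one way to set it up, assuming an npm project in C:\Testing\TestCafe with the test under tests/my-test.ts:
cd C:\Testing\TestCafe
npm install --save-dev testcafe
npx testcafe chrome tests/my-test.ts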
I was able to get things working by starting from scratch. I uninstalled TestCafe and cleaned the working folder, and during the next build it was fine. I'm sure I had tried this several times before, but it just started working.
One positive that came out of it was that I discovered a typo in a test file name, which was also causing issues finding the test I was using to check the testing setup.
Thanks for helping!!

Cypress giving false positive and not failing test despite finding wrong text

As mentioned in the title, the Cypress client proceeds to other tests without marking a test as failed even though it has in fact found a mismatch in the expectation. This behavior can be seen in the attached image:
This is intermittent but can go completely unnoticed when the tests run in a CI environment.
How am I supposed to debug & fix this issue?

How to force PHPUnit to fail on skipped tests?

I am trying to set up automated tests using PHPUnit and Selenium with headless Firefox. When Travis CI tries to run my tests, the Selenium server fails to start, but my test is considered OK, because PHPUnit marks it as skipped:
The Selenium Server is not active on host localhost at port 4444.
OK, but incomplete, skipped, or risky tests!
Tests: 1, Assertions: 0, Skipped: 1.
The command "make test" exited with 0.
In my opinion, a test should be considered failed when it could not even be started because of an internal error. This is really bad, as my tests could fail this way and, if I didn't read the full report, I could believe everything is in fact running okay, because Travis CI treats a return value of 0 as a successful test run.
Is there an option to make PHPUnit return non-zero result when there are skipped tests? Or even make it directly report test as failed on Selenium error?
Update:
See the accepted answer (--fail-on-skipped), added in version 9.4.3 (about two years after the question was asked).
Answer for previous versions:
Consider configuring the following parameters in your phpunit.xml file (a sample configuration is sketched below):
stopOnError="true"
stopOnFailure="true"
stopOnIncomplete="true"
stopOnSkipped="true"
stopOnRisky="true"
Reference
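For reference, a minimal phpunit.xml sketch with these attributes on the root element; the bootstrap path and test suite layout are just placeholders:
<?xml version="1.0" encoding="UTF-8"?>
<phpunit bootstrap="vendor/autoload.php"
         stopOnError="true"
         stopOnFailure="true"
         stopOnIncomplete="true"
         stopOnSkipped="true"
         stopOnRisky="true">
    <testsuites>
        <testsuite name="default">
            <directory>tests</directory>
        </testsuite>
    </testsuites>
</phpunit>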
If you want to use the command line, the equivalent arguments are:
--stop-on-error Stop execution upon first error.
--stop-on-failure Stop execution upon first error or failure.
--stop-on-warning Stop execution upon first warning.
--stop-on-risky Stop execution upon first risky test.
--stop-on-skipped Stop execution upon first skipped test.
--stop-on-incomplete Stop execution upon first incomplete test.
For your case, you want to stop on skipped.
SIDENOTE: For a test to be considered FAILED there must be a failing assertion. Since skipped tests are not actually executed, you cannot consider them failed; you should rely on the execution summary and make sure that no risky or skipped tests took place.
If you want your risky and warned tests to be considered as a FAILING TEST, you may use the following args:
--fail-on-warning Treat tests with warnings as failures.
--fail-on-risky Treat risky tests as failures.
Reference
If for any reason you would like to make PHPUnit return an exit code other than 0 (success), consider taking a look at How to make PHPunit return nonzero exit status on warnings.
Since version 9.4.3, PHPUnit has a --fail-on-skipped option:
$ vendor/bin/phpunit --help
(...)
--fail-on-incomplete Treat incomplete tests as failures
--fail-on-risky Treat risky tests as failures
--fail-on-skipped Treat skipped tests as failures
--fail-on-warning Treat tests with warnings as failures
(...)

Protractor test times out randomly in Docker on Jenkins, works fine in Docker locally

When using the APIs defined by Protractor and Jasmine (the default/supported runner for Protractor), the tests always work fine on individual developer laptops. For some reason, when they run on the Jenkins CI server, they fail (despite being in the same Docker containers on both hosts, which was wildly frustrating).
This error occurs: A Jasmine spec timed out. Resetting the WebDriver Control Flow.
This error also appears: Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
Setting getPageTimeout & allScriptsTimeout to 30 seconds had no effect on this.
I tried changing jasmine.DEFAULT_TIMEOUT_INTERVAL to 60 seconds for all tests in this suite; once the first error appears, every test waits the full 60 seconds and times out.
I've read and reread Protractor's page on timeouts but none of that seems relevant to this situation.
Even stranger still, it seems like some kind of buffer issue - at first the tests would always fail on a particular spec, and nothing about that spec looked wrong. While debugging I upgraded the selenium docker container from 2.53.1-beryllium to 3.4.0-einsteinium and the tests still failed but they failed a couple specs down - suggesting that maybe there was some optimization in the update and so it was able to get more done before it gave out.
I confirmed that by rearranging the order of the specs - the specs that had failed consistently before were now passing and a test that previously passed began to fail (but around the same time in the test duration as the other failures before the reorder.)
Environment:
protractor - 5.1.2
selenium/standalone-chrome-debug - 3.4.0-einsteinium
docker - 1.12.5
The solution ended up being simple. I first found it in a Chrome bug report, and it turned out it was also listed right on the front page of the docker-selenium repo, but the text wasn't clear about what it was for when I read it the first time. (It says that Selenium will crash without it, but the errors I was getting from Jasmine only talked about timeouts, which was quite misleading.)
Chrome uses /dev/shm, which is fairly small by default in Docker. There are workarounds for Chrome and Firefox linked from the docker-selenium README that explain how to resolve the issue.
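For completeness, the documented workarounds boil down to either sharing the host's /dev/shm with the container or giving the container a larger one; a minimal sketch, using the image tag from the environment above:
docker run -d -p 4444:4444 -v /dev/shm:/dev/shm selenium/standalone-chrome-debug:3.4.0-einsteinium
# or, alternatively, give the container a bigger shared-memory segment:
docker run -d -p 4444:4444 --shm-size=2g selenium/standalone-chrome-debug:3.4.0-einsteinium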
I had a couple test suites fail after applying the fix but all the test suites have been running and passing for the last day, so I think that was actually the problem and that this solution works. Hope this helps!