As mentioned in the title, the Cypress client proceeds to other tests without marking a test as failed, even though it has in fact found a mismatch in an expectation. This behavior can be seen in the attached image:
This is intermittent but can go completely unnoticed when the tests run in a CI environment.
How am I supposed to debug & fix this issue?
Related
I am having an issue in GitHub Actions.
I am using Cypress to test the frontend of my app, and everything works perfectly in the Cypress app.
But when I push everything to master on GitHub and run the tests via GitHub Actions, every now and then the same flaky test fails with the following error.
Timed out retrying after 5000ms: Expected to find element "xxx" but never found it.
This line is the problematic one:
cy.purposeElement("delete_user_dialog").should('be.visible')
The element we are talking about takes 0.2s to appear with a fade-in animation.
My guess is that the page was slow to respond, while Cypress was fast to act.
How can I solve a flaky test like that?
I could use a cy.wait, but it is not recommended and I do not want to increase the duration of the test.
Plus, the result would be just as flaky.
It may be as simple as increasing the timeout, especially if the test works perfectly when running on the local machine.
cy.purposeElement("delete_user_dialog", {timeout:20_000}).should('be.visible')
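cy.purposeElement appears to be a custom command; for the timeout to take effect, it has to pass its options through to the underlying query. A minimal sketch, assuming it wraps cy.get() with a hypothetical data-purpose selector:

// Hypothetical implementation - the real selector logic may differ.
// The key point is that `options` is forwarded to cy.get(), so
// {timeout: 20_000} actually extends the retry window.
Cypress.Commands.add('purposeElement', (purpose, options = {}) =>
  cy.get(`[data-purpose="${purpose}"]`, options)
);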
To be 100% sure, run a burn test, as described here: Burning Tests with cypress-grep
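With the cypress-grep plugin installed (an assumption about your setup), a burn run repeats the matching test many times in a row to prove the fix, e.g.:

npx cypress run --env grep="delete user dialog",burn=10

Here "delete user dialog" stands in for the actual test title.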
I am running TestCafe on CircleCI as:
testcafe "chrome '--window-size=1280,1080'" test_name --skip-js-errors --skip-uncaught-errors
Although all tests pass, the command still returns an error, without a stack trace or any way to find out where it is coming from, and it occurs randomly.
testcafe: 1.17.0
node: circleci/node#5.0.0
platform: ubuntu-2004:202111-02, windows-server-2019-vs2019:stable
browser: chrome, firefox, edge (all latest)
Unfortunately, I can't share the code base or the tests.
Any feedback will be appreciated.
Is there a way to tell the system to restart the test in case a specific rare system error comes up?
Basically, we sometimes get strange errors about elements being "obscured" or "stale", which do not mean the site is actually broken. I believe it has to do with the site's latency, e.g. CSS not loading quickly enough.
For example, is there a directive to tell the system that, if an error like
[Facebook\WebDriver\Exception\ElementClickInterceptedException] Element
<li id="nav_step0" class="nav-steps selected"> is not clickable at point (330,237)
because another element <div id="ajaxloading_mask" class="mask"> obscures it
comes up, it should simply relaunch the test?
No, there is no way to relaunch a failed test on a specific error.
You can rerun all failed tests:
codecept run || codecept run -g failed
This command executes all tests; if any tests fail, it reruns only the failed ones (Codeception records failures in a dedicated failed group, which -g failed selects).
I am trying to set up automated tests using PHPUnit and Selenium with headless Firefox. When Travis CI tries to run my tests, the Selenium server fails to start, but my test is considered OK, because PHPUnit marks it as skipped:
The Selenium Server is not active on host localhost at port 4444.
OK, but incomplete, skipped, or risky tests!
Tests: 1, Assertions: 0, Skipped: 1.
The command "make test" exited with 0.
In my opinion, a test should be considered failed when it couldn't even be started due to an internal error. This is really stupid, as my tests could fail this way and, if I didn't read the full report, I could believe everything is in fact running okay, because Travis CI considers a return value of 0 a successful test run.
Is there an option to make PHPUnit return non-zero result when there are skipped tests? Or even make it directly report test as failed on Selenium error?
Update:
See the accepted answer (--fail-on-skipped), added in version 9.4.3 (about two years after the question was opened).
Answer for previous versions:
Consider configuring the following parameters in your phpunit.xml file.
<phpunit stopOnError="true"
         stopOnFailure="true"
         stopOnIncomplete="true"
         stopOnSkipped="true"
         stopOnRisky="true">
    <!-- test suites etc. -->
</phpunit>
Reference
If you want to use the command line, the equivalent args are:
--stop-on-error Stop execution upon first error.
--stop-on-failure Stop execution upon first error or failure.
--stop-on-warning Stop execution upon first warning.
--stop-on-risky Stop execution upon first risky test.
--stop-on-skipped Stop execution upon first skipped test.
--stop-on-incomplete Stop execution upon first incomplete test.
For your case, you want to stop on skipped.
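For example:

vendor/bin/phpunit --stop-on-skipped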
SIDENOTE: For a test to be considered FAILED, there must be a failing assertion. Since skipped tests are not actually executed, you cannot consider them failed; you should rely on the execution summary and make sure that no risky or skipped tests took place.
If you want risky tests or tests with warnings to be treated as FAILING tests, you may use the following args:
--fail-on-warning Treat tests with warnings as failures.
--fail-on-risky Treat risky tests as failures.
Reference
If for any reason you would like to make PHPUnit return an exit code other than 0 (success), consider taking a look at How to make PHPunit return nonzero exit status on warnings
Since version 9.4.3, PHPUnit has a --fail-on-skipped option:
$ vendor/bin/phpunit --help
(...)
--fail-on-incomplete Treat incomplete tests as failures
--fail-on-risky Treat risky tests as failures
--fail-on-skipped Treat skipped tests as failures
--fail-on-warning Treat tests with warnings as failures
(...)
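So, for the Travis setup in the question, running the suite with this flag should make the skipped Selenium test produce a non-zero exit code and fail the build:

vendor/bin/phpunit --fail-on-skipped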
When using the APIs defined by Protractor & Jasmine (the default/supported runner for Protractor), the tests always work fine on individual developer laptops. For some reason, when they run on the Jenkins CI server, they fail (despite running in the same Docker containers on both hosts, which was wildly frustrating).
This error occurs: A Jasmine spec timed out. Resetting the WebDriver Control Flow.
This error also appears: Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
Setting getPageTimeout & allScriptsTimeout to 30 seconds had no effect on this.
I tried changing jasmine.DEFAULT_TIMEOUT_INTERVAL to 60 seconds for all tests in this suite; once the first error appears, every subsequent test waits the full 60 seconds and times out.
I've read and reread Protractor's page on timeouts but none of that seems relevant to this situation.
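For reference, here is a sketch of the relevant protractor.conf.js entries that were being tuned (values as described above, and none of them helped here):

// protractor.conf.js - the timeout knobs mentioned above
exports.config = {
  getPageTimeout: 30000,       // ms to wait for each page load
  allScriptsTimeout: 30000,    // ms to wait for async scripts on the page
  jasmineNodeOpts: {
    defaultTimeoutInterval: 60000  // jasmine.DEFAULT_TIMEOUT_INTERVAL
  },
  // ...rest of the config (specs, capabilities, etc.)
};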
Stranger still, it seems like some kind of buffer issue - at first the tests would always fail on a particular spec, and nothing about that spec looked wrong. While debugging I upgraded the Selenium Docker container from 2.53.1-beryllium to 3.4.0-einsteinium and the tests still failed, but a couple of specs further down - suggesting that maybe there was some optimization in the update, so it was able to get more done before it gave out.
I confirmed that by rearranging the order of the specs - the specs that had failed consistently before were now passing and a test that previously passed began to fail (but around the same time in the test duration as the other failures before the reorder.)
Environment:
protractor - 5.1.2
selenium/standalone-chrome-debug - 3.4.0-einsteinium
docker - 1.12.5
The solution ended up being simple - I first found it in a Chrome bug report, and it turned out it was also listed right on the front page of the docker-selenium repo, but the text wasn't clear about what it was for when I read it the first time. (It says that Selenium will crash without it, but the errors I was getting from Jasmine only talked about timeouts, which was quite misleading.)
Chrome makes heavy use of /dev/shm, and that is fairly small in a Docker container (64 MB by default). There are workarounds for Chrome and Firefox linked from the README that explain how to resolve the issue.
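For example, using the image from the environment above (the -v mount is the README's documented workaround; --shm-size is an equivalent Docker flag, available since Docker 1.10):

# mount the host's shared memory into the container
docker run -d -p 4444:4444 -v /dev/shm:/dev/shm selenium/standalone-chrome-debug:3.4.0-einsteinium
# or give the container a bigger /dev/shm of its own
docker run -d -p 4444:4444 --shm-size=2g selenium/standalone-chrome-debug:3.4.0-einsteinium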
I had a couple test suites fail after applying the fix but all the test suites have been running and passing for the last day, so I think that was actually the problem and that this solution works. Hope this helps!