My test suite uses Capybara for feature testing with PhantomJS as the driver for headless testing. We have gotten both up and working on Travis for our CI suite, but we're still getting failures as the suite runs (it works perfectly locally). Upon further examination I have realized that it is always the first feature test to run that fails, every time, regardless of the order (our tests are executed in a randomized order). When it fails it gives this error:
Capybara::Poltergeist::StatusFailError:
Request to 'http://127.0.0.1:52455/#/login' failed to reach server, check DNS and/or server status
# /home/travis/.rvm/gems/ruby-2.3.0/gems/poltergeist-1.9.0/lib/capybara/poltergeist/browser.rb:351:in `command'
# /home/travis/.rvm/gems/ruby-2.3.0/gems/poltergeist-1.9.0/lib/capybara/poltergeist/browser.rb:34:in `visit'
# /home/travis/.rvm/gems/ruby-2.3.0/gems/poltergeist-1.9.0/lib/capybara/poltergeist/driver.rb:95:in `visit'
# /home/travis/.rvm/gems/ruby-2.3.0/gems/capybara-2.7.0/lib/capybara/session.rb:233:in `visit'
# /home/travis/.rvm/gems/ruby-2.3.0/gems/capybara-2.7.0/lib/capybara/dsl.rb:52:in `block (2 levels) in <module:DSL>'
We are using version 1.9.0 of Poltergeist and 2.1.1 of PhantomJS. Every test that runs after this one works, even tests that depend on the same behavior (e.g. when testing authentication).
Has anybody encountered this issue / have any wisdom to share on it?
Sounds like your app is taking too long to start up; either investigate why your app is starting so slowly or increase the timeout setting for Poltergeist in its driver registration.
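Something along these lines, as a sketch (the 60-second value is illustrative, not from the question), usually in spec/spec_helper.rb or a Capybara support file:

require 'capybara/poltergeist'

Capybara.register_driver :poltergeist do |app|
  # :timeout is how many seconds Poltergeist waits on the server before giving up
  Capybara::Poltergeist::Driver.new(app, timeout: 60)
end
Capybara.javascript_driver = :poltergeist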
I ended up solving the issue by adding the following bit of code (inside the RSpec.configure block) to spec/spec_helper.rb:
config.before(:suite) do
  Capybara.current_driver = Capybara.javascript_driver
end
As I understand it, this makes all Capybara tests default to the JavaScript driver from the moment the test suite initializes, which fixed the problem perfectly.
Related
I am working on a Node.js application and would like to know if there is a way to run all the unit tests from all the sub-modules even when some tests fail, so I can see how many tests are failing in total and start putting in fixes for them. We use Mocha for our tests on the back-end and Jest for the UI.
Thanks.
The default behavior for Mocha is to run all the tests. If it is exiting after the first test failure, that suggests you are using the "bail" option, typically enabled on the command line with either --bail or -b.
Relevant docs: https://mochajs.org/#-bail-b
It can also be caused by passing the option { bail: true } to mocha.setup(). Look in your test runner and in your package.json.
Lastly, and least likely of these possibilities, it could also be caused by calling this.bail() somewhere in the Mocha test runner.
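As a quick check (a sketch; the paths are illustrative, not taken from the question), run the back-end suite without any bail flag and search the repo for places it might be switched on:

# no --bail / -b flag, so Mocha keeps going and reports every failure
./node_modules/.bin/mocha --recursive test/

# look for bail being enabled in scripts, config or setup code
grep -rn "bail" package.json test/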
When using the APIs defined by Protractor & Jasmine (the default/supported runner for Protractor), the tests always work fine on individual developer laptops. For some reason, when the tests run on the Jenkins CI server, they fail (despite using the same Docker containers on both hosts, which was wildly frustrating).
This error occurs: A Jasmine spec timed out. Resetting the WebDriver Control Flow.
This error also appears: Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
Setting getPageTimeout & allScriptsTimeout to 30 seconds had no effect on this.
I tried changing jasmine.DEFAULT_TIMEOUT_INTERVAL to 60 seconds for all tests in this suite; once the first error appears, every test waits the full 60 seconds and times out.
I've read and reread Protractor's page on timeouts but none of that seems relevant to this situation.
Stranger still, it seems like some kind of buffer issue. At first the tests would always fail on a particular spec, and nothing about that spec looked wrong. While debugging I upgraded the Selenium Docker container from 2.53.1-beryllium to 3.4.0-einsteinium and the tests still failed, but a couple of specs further down, suggesting that maybe there was some optimization in the update and it was able to get more done before giving out.
I confirmed that by rearranging the order of the specs: the specs that had failed consistently before were now passing, and a test that previously passed began to fail (but at around the same point in the test run as the earlier failures before the reorder).
Environment:
protractor - 5.1.2
selenium/standalone-chrome-debug - 3.4.0-einsteinium
docker - 1.12.5
The solution ended up being simple. I first found it in a Chrome bug report, and it turned out it was also listed right on the front page of the docker-selenium repo, but the text wasn't clear about what it was for when I'd read it the first time. (It says that Selenium will crash without it, but the errors I was getting from Jasmine only mentioned timeouts, which was quite misleading.)
Chrome apparently uses /dev/shm for shared memory, and that's fairly small inside a Docker container by default. There are workarounds for Chrome and Firefox linked from the docker-selenium README that explain how to resolve the issue.
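The documented workarounds, roughly (image tag taken from the environment above; the 2 GB size is just an example), are either mounting the host's /dev/shm into the container or enlarging the container's own shared memory:

# mount the host's /dev/shm into the container
docker run -d -p 4444:4444 -v /dev/shm:/dev/shm selenium/standalone-chrome-debug:3.4.0-einsteinium

# or give the container a bigger shared memory segment
docker run -d -p 4444:4444 --shm-size=2g selenium/standalone-chrome-debug:3.4.0-einsteinium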
I had a couple test suites fail after applying the fix but all the test suites have been running and passing for the last day, so I think that was actually the problem and that this solution works. Hope this helps!
I am using Selenium locally to run two .feature files, and both pass locally. Neither of the tests interacts with a database (nor will future tests). I would like to use the parallel_tests gem to spin up two Selenium browsers concurrently and run each .feature file. I have tried to follow the README on the gem's homepage, but am still having no luck.
I can run rake parallel:features and I get the following output:
Using recorded test runtime
8 processes for 2 features, ~ 0 features per process
However, it then proceeds to fail immediately and informs me that I have not defined the scenarios.
I am using Rails 3.2 and Capybara
I have also tried adding begin; require 'parallel_tests/tasks'; rescue LoadError; end
to the top of my Rakefile, which I found suggested elsewhere, but it doesn't help.
Try using:
parallel_cucumber -n 2 features
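By default the rake task runs one process per CPU core (hence the "8 processes for 2 features" output); -n 2 caps it at two processes, one per .feature file. If you'd rather keep using the rake task, the process count can also be passed as a task argument (per the parallel_tests README, if I recall it correctly):

rake "parallel:features[2]"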
Anyone know any particular reason why a request spec never passes when run with bundle exec rspec spec but passes when run directly with bundle exec rspec spec/requests/models_spec.rb?
I have tried the spec with both Selenium and Poltergeist but get the same result: when I run the whole test suite the spec fails, and when I run it individually it passes.
I also have a question concerning a model spec, Why would RSpec report multiple validation errors of the same type?, that could possibly be related.
Interesting but simple. Threads seem to have gotten intertwined, and a button that should have said Add was saying Manage. The temporary fix was to just delete everything from the database before running each spec.
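A minimal sketch of that kind of cleanup in spec/spec_helper.rb, assuming the database_cleaner gem and a truncation strategy (both my assumptions; the answer doesn't name a gem):

require 'database_cleaner'

RSpec.configure do |config|
  config.before(:each) do
    # wipe every table so state from one spec can't leak into the next
    DatabaseCleaner.clean_with(:truncation)
  end
end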
I want to run my Selenium HTML test suite through Jenkins (a continuous integration server). The following shows how the build is configured for the current project:
And here's the console output after committing a new test, for example:
ERROR: The suiteFile is not a file or an url ! Check your build configuration.
Build step 'SeleniumHQ htmlSuite Run' changed build result to FAILURE
Build step 'SeleniumHQ htmlSuite Run' marked build as failure
Publishing Selenium report...
Finished: FAILURE
In fact, I get these log issues even after committing both extensionless test files AND .html files.
The SeleniumHQ Jenkins plugin supports only ONE suite file per build step. Try out Selunit to run Selenese suites in batch and across multiple browsers. This tutorial shows how to set up the test execution in Jenkins/Hudson.
Your suiteFile is written with a wildcard: tests/selenium/*.html. I think that is wrong.
You need to provide the exact/absolute path to your suite without the wildcard, as below:
tests/selenium/suite.html