We've got a fairly large test suite with lots of integration tests, with the following setup:
REE ree-1.8.7-2011.03 under RVM
test/unit with shoulda 2.11.3
integration tests with capybara 1.0.0 in rack/test mode
Rails 3.0.10
spree 0.60.0
Occasionally, but not always, after running the integration tests (via rake test:integration) and hitting a couple of test failures or errors, the Ruby process running the tests hangs for minutes at full CPU (well, full core) utilization, apparently churning away on something, before finally displaying the error output.
My Google-Fu fails here.
Does anybody have suggestions or know of bugs in this area?
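One way to see what the spinning process is churning on is to dump a backtrace on demand; a minimal sketch, assuming the suite loads a test/test_helper.rb you can edit:
Signal.trap("USR1") do
  # When the run hangs, `kill -USR1 <pid>` from another shell prints
  # whatever the test process is executing at that moment
  puts "== backtrace at #{Time.now} =="
  puts caller
end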
We have JUnit tests that boot up the entire server and then test some functions in the booted server. However, this takes a lot longer than simply booting the server regularly through its main function, outside of a JUnit test. Why could that be?
More concretely, we're using the Dropwizard framework with a Jetty server. The logs I get after starting within a unit test are
INFO [2022-09-19 15:12:45,985] org.eclipse.jetty.server.Server: Started #60649ms
and the logs I get with a regular application start are
INFO [2022-09-19 15:15:06,887] org.eclipse.jetty.server.Server: Started #13093ms
As you can see, it's around 6 times faster outside of JUnit.
Is there any reason for that? The server spawned in JUnit isn't 100% identical to the other one, but it comes pretty close. I don't see any reason why it should be that much slower. Is there any "known" reason why this is happening, or is it more likely that it's something about how we spawn the server in our testing environment?
When trying to find the bottleneck, I couldn't identify one single place that runs slower. It seems that everything is just running slower in the tests.
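For context, a standard dropwizard-testing harness boots the app roughly like this (a minimal JUnit 5 sketch; MyApplication, MyConfiguration, and test-config.yml are placeholders, and the asker's actual setup may differ):
// Boots the full application (Jetty included) once for this test class
import static org.junit.jupiter.api.Assertions.assertEquals;

import io.dropwizard.testing.ResourceHelpers;
import io.dropwizard.testing.junit5.DropwizardAppExtension;
import io.dropwizard.testing.junit5.DropwizardExtensionsSupport;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

@ExtendWith(DropwizardExtensionsSupport.class)
class ServerBootTest {

    static final DropwizardAppExtension<MyConfiguration> APP =
            new DropwizardAppExtension<>(
                    MyApplication.class,
                    ResourceHelpers.resourceFilePath("test-config.yml"));

    @Test
    void serverIsUp() {
        // A request against the running server proves it actually started
        var response = APP.client()
                .target("http://localhost:" + APP.getLocalPort() + "/")
                .request()
                .get();
        assertEquals(200, response.getStatus());
    }
}
A harness like this should boot the same Jetty stack that main() does, so large timing gaps usually point at the surrounding JVM or test setup rather than the server itself.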
Edit:
One additional piece of information: I run on an M1 Mac. My previous results, where the tests were 6 times slower than the server start, were with brew install --cask adoptopenjdk11. However, when I switched to brew install --cask zulu-jdk11, the disparity became much smaller: 18 seconds for test runs versus 12 seconds for server starts. It doesn't make the mystery smaller, but it makes me a bit happier.
I am using Hound (https://github.com/HashNuke/hound) for integration testing a Phoenix application. I have Chrome and headless Chrome working. To get them working, I keep another terminal window running chromedriver (installed via brew). This feels odd to me. Is there a library or test setup that would feel more "integrated" into the application? What's the Elixir way of doing this?
In the Ruby world there's the webdrivers gem (https://github.com/titusfortner/webdrivers). As far as I know, it downloads a specified driver (let's say chromedriver) to $HOME. Then on every test run, the tests use the driver downloaded to that destination.
Before the webdrivers gem there was the chromedriver-helper gem. Before that it was PhantomJS. These implementations made it so running integration tests required only: 1) download the driver, 2) run the tests.
In Elixir (with Hound) I have my tests working by first running chromedriver --verbose in one terminal split, while in the other I run mix test. This works fine but feels disjointed, and it adds extra steps: 1) download the driver, 2) start the driver, 3) run the tests, 4) stop the driver.
I could write a script manually to run chromedriver in the background, and stop it after the tests are run.
I am new to the Elixir community, so I've researched a lot. It's still not clear to me whether there is a well-traveled path I should go down, versus just hooking everything up manually.
Have I missed a recommended abstraction? Is this intentional? Is this "just not created, yet"?
Thank you
Have you checked out Wallaby? See https://github.com/keathley/wallaby
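Wallaby starts and supervises the chromedriver process itself for the duration of mix test, so the separate terminal goes away (you still need chromedriver installed). A minimal sketch, assuming a recent Wallaby version in a Phoenix app; the app, module, port, and selector names are placeholders:
# config/test.exs: run the Phoenix endpoint during tests and point Wallaby at it
config :my_app, MyAppWeb.Endpoint, server: true
config :wallaby, driver: Wallaby.Chrome, base_url: "http://localhost:4002"

# test/test_helper.exs
{:ok, _} = Application.ensure_all_started(:wallaby)
ExUnit.start()

# test/my_app_web/features/home_page_test.exs
defmodule MyAppWeb.HomePageTest do
  use ExUnit.Case, async: true
  use Wallaby.Feature

  # Wallaby starts and stops chromedriver itself; no extra terminal needed
  feature "home page renders a heading", %{session: session} do
    session
    |> visit("/")
    |> assert_has(Wallaby.Query.css("h1"))
  end
end
Compared with the Hound setup above, steps 2 and 4 (start/stop the driver) disappear; only installing chromedriver remains manual.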
I am having a really hard time figuring out why our Selenium test cases run so slowly with PhantomJS/GhostDriver. When the developers run the test cases against the dev environment, it is faster (about 1 hour to complete 5 test cases), but when run from Jenkins, it takes 4 hours.
I turned off IPv6 on the dev machine and also tried switching to version 1.9.1, but there was still no improvement in the time taken.
Jenkins machine
OS: RHEL 5.6 64 bit
JDK: 1.7
PhantomJS: 1.9.2
Developer machine
OS: Windows 7 64 bit
JDK: 1.7
PhantomJS: 1.9.2
Can someone please help?
Thanks in advance
Are you calling driver.quit() or phantom.exit() after each test case? The PhantomJS process does not get killed automatically, and if you aren't, the leftover processes could be what is slowing down your test cases.
If your tests do not quit the drivers, your Jenkins box will accumulate a lot of open browser processes in memory after a while.
For instance, every test that starts, asserts false somewhere, and dies leaves an unclosed driver behind. These tend to build up.
Depending on your testing framework, there may be good solutions other than wrapping each test in its own try/catch block.
py.test has great fixture functionality. You can have a phantomdriver() fixture that opens the browser for each test; when the test is done (whether it passes or aborts on a failed assertion), the finalizer closes the driver.
Example (py.test, Python, Selenium):
import pytest
from selenium import webdriver

@pytest.fixture
def phantomdriver(request):
    driver = webdriver.PhantomJS()
    def fin():
        # quit() also kills the PhantomJS process; close() alone would not
        driver.quit()
    request.addfinalizer(fin)
    return driver
In this situation, every test that uses this fixture, no matter how it exits, will end up calling the finalizer fin() to quit the driver.
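A test then just takes the fixture as an argument; a hypothetical usage (the URL and title are placeholders):
def test_homepage_title(phantomdriver):
    # The fixture supplies a live driver and guarantees cleanup afterwards
    phantomdriver.get("http://example.com")
    assert "Example" in phantomdriver.title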
I have a large suite of Jasmine unit tests that were developed to run on a Selenium Grid using the jasmine:ci rake task provided by the Jasmine 1.3 Ruby gem. There was decent integration between Jasmine 1.3 and Selenium WebDriver, and running tests on a remote node was as simple as passing some environment variables:
$ rake jasmine:ci SELENIUM_SERVER="http://hub.localdomain:4444/wd/hub" JASMINE_HOST="http://currenthost" JASMINE_BROWSER="chrome"
In Jasmine 2, this capability is gone, replaced by an integration with PhantomJS. Unfortunately, I can't find any discussion of migration options for people who still need WebDriver support.
Is there a way to run Jasmine 2 tests using Selenium WebDriver? Does anyone know of any existing projects or documentation focused on this integration? My query to the Jasmine dev list has gone unanswered.
On the Jasmine team, it seemed to us that most people wanted to run their tests headless, so with 2.0 we made that the default. Running the tests in Selenium also gave the jasmine gem a number of dependencies that potentially made it harder to install.
But we also see the value in running Jasmine tests in multiple (real) browsers. To that end, we extracted the Selenium code, including the Sauce Labs integration, into its own gem. Jasmine core actually uses this gem to run its own tests across multiple browsers.
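Assuming the extracted gem referred to here is jasmine_selenium_runner (https://github.com/jasmine/jasmine_selenium_runner), pulling it in looks roughly like this:
# Gemfile sketch; the gem name is an assumption, check the Jasmine docs
group :development, :test do
  gem 'jasmine'
  gem 'jasmine_selenium_runner'
end
Browser choice (and Sauce Labs credentials, if any) then live in a YAML config file rather than environment variables; see the gem's README for the exact keys.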
I am using Ruby on Rails under Windows, installed with RailsInstaller. Everything works fine, except that any command such as rails console or bundle exec rake db:migrate takes on average 8 seconds before executing. (rails s and rails -v are exceptions and take about 1 to 2 seconds to launch, which is still abnormally slow.) I am not talking about the time of the entire command, just the time between when I hit Enter and when I see the first output.
During this time, one core of my processor runs at 100%, and there is no load on the hard drive. I really feel like I am waiting for some kind of timeout to expire, because I don't see why rails console should take that much processing power (I have a Core 2 Duo processor).
Have you experienced this kind of problem? What can it be? How can I investigate it?
It is spinning up your Rails environment, not just loading an executable. It is not Windows-specific: it takes about 10 seconds on my Core 2 Duo iMac, and I've seen similar delays on Linux boxes. Here is an article that gives some hints that may help:
rails-3-osx-speed-up-console-loading-time
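To confirm the delay really is environment load (and to measure it without the console getting in the way), a minimal sketch, run from the app root; the file name is arbitrary:
# boot_profile.rb: times just the Rails environment load, which is the
# same work `rails console` does before it shows a prompt
require 'benchmark'
elapsed = Benchmark.realtime do
  require File.expand_path('../config/environment', __FILE__)
end
puts "Rails environment loaded in %.2f seconds" % elapsed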