We have JUnit tests that boot up the entire server and then test some functions in the booted server. However, this takes a lot longer than simply booting the server regularly through its main function, outside of a JUnit test. Why could that be?
More concretely, we're using the Dropwizard framework with a Jetty server. The logs I get after starting within a unit test are
INFO [2022-09-19 15:12:45,985] org.eclipse.jetty.server.Server: Started @60649ms
and the logs I get with a regular application start are
INFO [2022-09-19 15:15:06,887] org.eclipse.jetty.server.Server: Started @13093ms
As you can see, it's around five times faster outside of JUnit.
Is there any reason for that? The server spawned in JUnit isn't 100% identical to the other one, but it comes pretty close. I don't see any reason why it should be that much slower. Is there any "known" reason why this is happening, or is it more likely that it's something about how we spawn the server in our testing environment?
When trying to find the bottleneck, I couldn't identify one single place that runs slower. It seems that everything is just running slower in the tests.
Edit:
One additional piece of information: I'm running on an M1 Mac. My previous results, where the tests were around five times slower than the regular server start, were with brew install --cask adoptopenjdk11. However, when I switched to brew install --cask zulu-jdk11, the disparity became much smaller: 18 seconds for test runs versus 12 seconds for server starts. It doesn't make the mystery smaller, but it makes me a bit happier.
I'm just getting back into trying some front-end projects for the first time in a few years. Many npm-based JavaScript projects I try out end up taking a long time to start up in development mode, even for Hello World-ish examples. In particular, I'm trying out Nuxt.js.
Dev server startup takes about 100 seconds, and nothing seems to get cached, so restarts (not hot reloads) take the same amount of time. My research into the project and known npm issues has not turned up any definitive root cause or way to improve this yet.
I'm using Emacs 26.1 in terminal mode on a 2018 13" MacBook Pro with a Core i5, 8 GB of RAM, and an SSD.
When I run npm run dev to start up the Nuxt dev server, I get repeated error in process filter: Args out of range: "\342", -1 errors related to some unusual characters they are using to try to make the output pretty. If I try the same thing in a vanilla macOS Terminal, the server startup goes 10x faster. Why do those errors occur, and why is it so much slower in an Emacs terminal?
It turns out the repeated error in process filter issue may be caused by a bug in term mode that was recently fixed but might still be present in my version of Emacs.
As a workaround, the following can get the Nuxt dev server running in ~10s instead of ~100s in an Emacs terminal on my Mac, by filtering out the repeated lines about the modules being built:
$ npm run dev | grep -v modules
Note that I tried using npm's options to adjust the log level, but none seem to filter this output. If anyone knows a more "official" way of filtering this, or even better, how to make it so that it doesn't try to rebuild the modules on every dev server startup, I'd be interested to know.
Edit: it might make sense to adjust the dev script command in the package.json file to include the grep filter; that way you can still just type npm run dev and get the workaround.
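For example, assuming the project's existing dev script is just "nuxt" (check what your package.json actually contains), the scripts section could become:

"scripts": {
  "dev": "nuxt | grep -v modules"
}

That just bakes the same grep filter into the script, so a plain npm run dev picks up the workaround.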
I've spent about a day looking for solutions around the web to my issue but none work for me.
Here is my scenario:
I am running Selenium scripts with ChromeDriver using the pyATS framework on my Ubuntu 18.04 VM. The VM has 4 GB of memory. I have also set up Jenkins on the machine and am trying to run the pyATS script with the pyATS plugin.
When running in headless mode from the terminal, the script runs in the same time as or faster than in non-headless mode. However, when I run it in Jenkins on the same machine, I am getting extreme slowdowns. It looks almost as if Jenkins is running my script in sections, with >2 minutes of delay in between steps at random.
I've tried Xvfb, headless mode with various Chrome options (no proxy, proxy options, GPU disabled, etc.), and increasing the heap memory for Jenkins, but I always get the same random 2 minutes of delay in between script steps.
The script doesn't fail - it will complete eventually. But for a step that I expect to take around 2 minutes, Jenkins will take 10 minutes.
I currently don't have a way to increase the memory my VM has, but are there any other solutions that I can try in the meantime?
Found the issue: I had to set the --proxy-server option for Chrome to the proxy my VM was running behind. For some reason Firefox was working fine without that option, so I didn't think to set it for Chrome.
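For anyone hitting the same thing, here is a minimal sketch of passing that flag through Selenium's ChromeOptions in Python; the proxy address is a placeholder, not the real one:

from selenium import webdriver

PROXY = "http://proxy.example.com:8080"  # placeholder - use the proxy your VM actually sits behind

options = webdriver.ChromeOptions()
options.add_argument("--headless")
options.add_argument("--proxy-server=" + PROXY)

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")  # any page that is only reachable through the proxy
finally:
    driver.quit()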
I'm having a problem with TensorFlow (CPU) on Ubuntu 14.04 (VM, droplet), where running a script is fast the first time, but when running the same (or another) script directly after completion of the first run, things become very slow.
I'm talking minutes instead of seconds. Even simple test scripts (like those provided in the tutorial) take forever, with no visible CPU load.
For comparison: first run of the test script from the tutorial gives:
real    0m0.790s
user    0m0.688s
sys     0m0.111s
Second run of the same script, directly after completion of the first run gives:
real    2m46.628s
user    0m0.783s
sys     0m0.104s
Eventually, things seem to clear up and performance is back (only for one run though).
I narrowed the problem down to this:
sess = tf.Session()
takes a very long time. Apparently resources used by a previous Session are not properly released [?]. My scripts use the context manager, like
with tf.Session() as sess:
    sess.run(...)
My latest hypothesis is that this has to do with system properties (virtual machine settings, hypervisor issues interacting with the context manager of TF). Using the Docker container of TF makes no difference. Rebooting didn't help either. The same scripts run OK on OS X.
To make sure it's obvious what happened and that this question is answered: this occurred because TensorFlow was reading from /dev/random instead of /dev/urandom. On some systems, /dev/random can exhaust its supply of randomness and block until more is available, causing the slowdown. This has now been fixed on GitHub; the fixes are included in release 0.6.0 and later.
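If anyone wants to confirm this kind of cause on a Linux box, one rough check (my own sketch, not something from the TensorFlow project) is to read the kernel's entropy counter while the slow tf.Session() call is in progress:

# Linux-only: how much entropy the kernel currently has available.
# A value near zero while a process appears hung suggests it is
# blocking on /dev/random, waiting for more entropy.
with open("/proc/sys/kernel/random/entropy_avail") as f:
    print("entropy_avail:", f.read().strip())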
I am having a really hard time figuring out why the Selenium test cases are running slowly with PhantomJS/GhostDriver. When the developers run the test cases against the dev environment they run faster (it takes 1 hour to complete 5 test cases), but when run from Jenkins, it takes 4 hours.
I turned off IPv6 on the dev machine and also tried switching to version 1.9.1, but there was still no improvement in the time taken.
Jenkins Machine
PhantomJS: 1.9.2
Jenkins Server: RHEL 5.6 64-bit
JDK: 1.7
Developer Machine
OS: Windows 7 64-bit
JDK: 1.7
PhantomJS: 1.9.2
Can someone please help?
Thanks in advance
Are you using driver.quit() or phantom.exit() after each test case? The PhantomJS process does not get killed automatically. If not, that could be the reason your test cases are slowing down.
If your tests do not quit the drivers, your Jenkins box will have a lot of open browsers in memory after a while.
For instance, every test that starts, then asserts false somewhere and dies, leaves an unclosed driver. These tend to build up after a while.
Depending on your testing framework, there may be good solutions around this other than wrapping each test in its own try/finally block (a sketch of that per-test cleanup follows below).
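For context, the per-test boilerplate that a fixture spares you from would look roughly like this (the page and assertion are made up):

from selenium import webdriver

def test_login_page():
    driver = webdriver.PhantomJS()
    try:
        driver.get("https://example.com/login")  # hypothetical page under test
        assert "Login" in driver.title
    finally:
        driver.quit()  # always kill the PhantomJS process, even if the assertion fails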
py.test has great fixture functionality. You can have a phantomdriver() fixture that opens the browser for each test; then, after the test is done (whether it passes or aborts from a failed assertion), it can finalize and close up the driver.
Pseudocode example (py.test, Python, Selenium):
import pytest
from selenium import webdriver

@pytest.fixture
def phantomdriver(request):
    driver = webdriver.PhantomJS()
    def fin():
        driver.quit()  # quit (not close) so the PhantomJS process is actually killed
    request.addfinalizer(fin)
    return driver
In this situation, every test that uses this fixture, no matter how it exits, will end up calling the finalizer fin() to quit the driver.
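For illustration, a test then just takes the fixture as an argument (the test name and URL here are made up):

def test_homepage_title(phantomdriver):
    # phantomdriver is the open PhantomJS driver supplied by the fixture;
    # the finalizer quits it once the test finishes, pass or fail.
    phantomdriver.get("https://example.com")
    assert "Example" in phantomdriver.title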
I am using Ruby on Rails under Windows, installed with RailsInstaller. Everything works fine, except that any command such as rails console or bundle exec rake db:migrate takes on average 8 seconds before executing. (rails s and rails -v are exceptions and take about 1 to 2 seconds to launch, which is still abnormally high.) I am not talking about the time of the entire command, just the time between when I hit enter and when I see the first output.
During this time, one core of my processor is working at 100%, and there is no load on the hard drive. I really feel like I am waiting for some kind of timeout to expire, because I don't see why rails console should take that much processing power (I have a Core 2 Duo processor).
Have you experienced this kind of problem? What could it be? How can I investigate this?
It is spinning up your Rails environment, not just loading an executable. It is not Windows-specific. It takes about 10 seconds on my Core 2 Duo iMac. I've seen similar delays on Linux boxes. Here is an article that gives some hints that may help:
rails-3-osx-speed-up-console-loading-time