In Go, how do you profile transitive init() CPU usage?

I'm working on a project with a very large monorepo. When I run tests, the tests do not take very long to run the actual test case, but there is a lot of setup time before running the test.
I've tried go test -i but am not seeing much of a difference. That suggests the time is not a compilation issue, so my next step is to profile everything that happens before the test case runs.
There are many transitive dependencies and I would prefer to not manually look through the graph adding printlns to get the timings. Are there any tools to profile all the transitive initialization that happens in Go before running a test?
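One tool worth knowing about, assuming you are on Go 1.16 or newer: the runtime can trace every package init for you via GODEBUG=inittrace=1, printing one line per init with its clock time and allocation counts, so no manual printlns are needed. A sketch (the -run pattern is only there to match no tests and isolate the setup cost):
# Trace every transitive init that runs before the tests start
GODEBUG=inittrace=1 go test -run NoSuchTest ./path/to/pkg

# Each init prints a line like:
# init crypto/tls @1.2 ms, 0.6 ms clock, 150632 bytes, 1148 allocs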

Related

IntelliJ, Cucumber, Java and Selenium - no tests are starting

First, important note: This is a project in which the Selenium/Cucumber test suite is working. It's working locally for the developers, and it's working in the Tekton environment.
Second: I have an identical IntelliJ environment as the developers. And other Selenium/Cucumber projects are running fine on my local machine.
When I try to run a fresh copy of the project in question locally, none of the tests start, and there is no error message.
Normally, this would be caused by missing Glue parameters in the Run/Debug configuration, since IntelliJ for some reason is often unable to add these automatically. That is not the case here.
The test runner when I try to start a specific scenario:
Sorry for the language. The tests are all in Norwegian. But I don't think that matters for seeing the problem.
Normally, when there is an actual error in a test step - in this case "Gitt jeg er en vanlig søker" (Norwegian for "Given I am an ordinary applicant") - there would be some info to the right of the steps overview. This is now blank, as can be seen. So the test steps aren't even started.
The run/debug configuration:
e2e.cucumber.felles is the package where the Java file with the first step definition is located. So, that's not the reason.
Any ideas?

How to keep the setup for QuarkusIntegrationTest/Native image testing running to develop tests efficiently?

I just set up our first @QuarkusIntegrationTest. Now I want to write tests against my API.
At the moment, I have to write a test, then run the whole lot (including the start of dev services, test resources, and the container image under test) to verify it. This obviously takes quite a while and might not be the most efficient way.
Is there a way to start up a @QuarkusIntegrationTest environment, including dev services, QuarkusTestResource, and the container under test, and keep it running without automatically running the tests, so I can write and run my just-written test independently against the running environment?

How do I configure my unit tests to run automatically with Elm-Live?

I currently run elm-live as follows:
elm-live Home.elm --open --output=home.js
In addition to having automated compilations per modification of my web app, I would also like to ensure that I did not introduce breaking changes as well by having unit tests execute automatically after compiling.
Any suggestions?
You can use concurrently to run both processes in the same terminal instance.
The downside is that the stdout will probably not preserve the colors, so reading errors will be a little tricky.
concurrently 'elm-live Home.elm --open --output=home.js' 'elm-test --watch'
Example
I've made an example of this setup, check it out on GitHub.
Update: I have updated the example to be Windows-compatible. Apparently, the package.json script should have used escaped double quotes instead of single quotes.
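For reference, a sketch of what the Windows-compatible script could look like in package.json (the script name watch is just an illustration):
"scripts": {
    "watch": "concurrently \"elm-live Home.elm --open --output=home.js\" \"elm-test --watch\""
}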

How to get around memory error with karma & phantomjs

We're running tests using Karma and PhantomJS. Last week, our tests mysteriously started crashing PhantomJS with an error of -1073741819.
Based on this thread for Chutzpah, it appears that error code indicates a native memory failure in PhantomJS.
Upon further investigation, we are consistently seeing PhantomJS crash at around 750 MB of memory.
Is there a way to configure Karma so that it does not run up against this limit? Or a way to tell it to flush phantom?
We only have around 1200 tests so far. We're about 1/4 of the way through our project, so 5000 UI tests doesn't seem out of the question.
Thanks to the StackOverflow phenomenon of posting a question and quickly discovering an answer, we solved this by adding gulp tasks. Before, we were just running karma start at the command line, which spun up a single instance of PhantomJS that crashed when 750 MB was reached.
Now we have a gulp command for each of our sections of tests, e.g. gulp common-tests, gulp admin-tests, and gulp customer-tests.
Then a single gulp karma runs each of those groupings in turn. This gives each gulp command its own instance of PhantomJS, and therefore stays under that threshold.
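A minimal sketch of that gulpfile, assuming gulp 4 (for gulp.series) and Karma's public JavaScript API; the per-section config files (karma.common.conf.js and friends) are hypothetical names:
var gulp = require('gulp');
var Server = require('karma').Server;

// Each task starts its own Karma server, and therefore its own PhantomJS
// instance, so no single run approaches the ~750 MB crash threshold.
function runKarma(configFile, done) {
    new Server({
        configFile: __dirname + '/' + configFile,
        singleRun: true
    }, done).start();
}

gulp.task('common-tests', function (done) { runKarma('karma.common.conf.js', done); });
gulp.task('admin-tests', function (done) { runKarma('karma.admin.conf.js', done); });
gulp.task('customer-tests', function (done) { runKarma('karma.customer.conf.js', done); });

// A single command that runs the groupings one after another
gulp.task('karma', gulp.series('common-tests', 'admin-tests', 'customer-tests'));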
We ran into a similar issue. Your approach is interesting and certainly sidesteps the problem. However, be prepared to face it again later.
I've done some investigation and found the cause of the memory growth (at least in our case). It turns out that when you use:
beforeEach(inject(function (SomeActualService) { /* ... */ }));
the memory taken up by SomeActualService does not get released at the end of the describe block, and if you have multiple test files where you inject the same service (or other injectable objects), more memory is allocated for it each time.
I have a couple of ideas on how to avoid this:
1. Create mock objects and never use inject to get real objects unless you are in the test that tests that module. This will require writing tons of extra code.
2. Create your own tracker (for tests only) for injectable objects, so each one is loaded only once and reused between test files (see the sketch below).
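A minimal sketch of idea 2, assuming angular-mocks is loaded; getInjected and its cache are hypothetical helpers, not an existing API:
// test-helpers.js (hypothetical): resolve each injectable once, reuse it everywhere
var injectedCache = {};

function getInjected(name) {
    if (!injectedCache[name]) {
        // angular.mock.inject runs immediately when called inside a running spec
        inject(function ($injector) {
            injectedCache[name] = $injector.get(name);
        });
    }
    return injectedCache[name];
}

// In a spec file, instead of beforeEach(inject(...)):
// var someActualService = getInjected('SomeActualService');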
Forgot to mention: We are using angular 1.3.2, Jasmine 2.0 and hit this problem around 1000 tests.
I was also running into this issue after about 1037 tests on Windows 10 with PhantomJS 1.9.18.
It would appear as ERROR [launcher]: PhantomJS crashed. once the process's RAM exceeded about 800-850 MB.
There appears to be a temporary fix here:
https://github.com/gskachkov/karma-phantomjs2-launcher
https://www.npmjs.com/package/karma-phantomjs2-launcher
You install it via npm install karma-phantomjs2-launcher --save-dev
You then need to use it in karma.conf.js via:
config.set({
    browsers: ['PhantomJS2'],
    // ...
});
This seems to run the same set of tests while only using between 250-550 MB RAM and without crashing.
Note that this fix works out of the box on Windows and OS X, but not Linux (PhantomJS2 binaries won't start). This affects pushes to Travis CI.
To work around this issue on Debian/Ubuntu:
sudo apt-get install libicu52 libjpeg8 libfontconfig libwebp5
This is a problem with PhantomJS itself. According to another source, PhantomJS only runs its garbage collector when a page is closed, and that only happens after your tests finish. Other browsers work fine because their garbage collectors run as expected.
After spending a few days on the issue, we concluded that the best solution was to split tests into groups. We had grunt create a profile for each directory dynamically and created a command that runs all those profiles. For all intents and purposes, it works just the same.
We had a similar issue on Linux (Ubuntu) that turned out to be caused by the limit on the number of memory map areas a process may have:
$ cat /proc/sys/vm/max_map_count
65530
Then run this:
$ sudo bash -c 'echo 6553000 > /proc/sys/vm/max_map_count'
Note the number was multiplied by 100.
This changes the setting for the current session only. If it solves the problem, you can set it persistently for all future sessions:
$ sudo bash -c 'echo vm.max_map_count = 6553000 > /etc/sysctl.d/60-max_map_count.conf'
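If you go the persistent route, the new file can be loaded without rebooting (sysctl -p reads settings from a given file):
$ sudo sysctl -p /etc/sysctl.d/60-max_map_count.conf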
Responding to an old question, but hopefully this helps ...
I have a build process which a CI job runs on a command-line-only Linux box, so PhantomJS seems to be my only option there. I have experienced this memory issue locally on my Mac, but somehow it doesn't happen on the Linux box. My solution was to add another test command to my package.json that runs Karma using Chrome, and to run that locally. When pushed up, Jenkins kicks off the regular test command, running PhantomJS.
Install this plugin: https://github.com/karma-runner/karma-chrome-launcher
Add this to package.json:
"scripts": {
    "test": "karma start",
    "test:chrome": "karma start --browsers Chrome"
}

Running single integration test quickly in Grails

Is it possible to quickly run a single integration test, or all the integration tests in a class, in Grails? The test-app command comes with the heavy baggage of clearing all compiled files and generating Cobertura reports, so even if we run a single integration test, the entire code base is compiled and instrumented and the Cobertura report is generated. For our application this takes more than 2 minutes.
If it were possible to quickly run one integration test and get rapid feedback, it would be immensely helpful.
Also, is it important to clean up all the compiled files once the test is complete? This cleaning is fine if we run the entire set of integration tests, but if we are only running one or two tests in a class, this cleaning and re-compiling seems to be a big bottleneck for quick developer feedback.
Thanks
If you have an integration test class
class SimpleControllerTests extends GrailsUnitTestCase {
    public void testLogin() {}
    public void testLogin2() {}
    public void testLogin3() {}
}
You can run just one test in this class using:
grails test-app integration: SimpleController.testLogin
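If I remember the syntax correctly, dropping the method name runs every test in that class:
grails test-app integration: SimpleController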
However, you will still have to incur the time penalty required for integration testing (loading config, connecting to the DB, instantiating Spring beans, etc.).
If you want your tests to run quickly, then try to write unit tests rather than integration tests.
It is the intention of an integration test to do all this compiling, database creation, server starting, etc., because the tests should run in an integrated environment, as the name implies.
Maybe you can extract some tests to unit tests. These you can run in Eclipse.
You can switch off Cobertura by placing the following code in your grails-app/conf/BuildConfig.groovy:
coverage {
    enabledByDefault = false
}
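If you are using the code-coverage plugin, I believe you can still request a coverage run on demand even with it disabled by default:
grails test-app -coverage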
As you stated, the majority of the time goes into setting up the application environment, injecting beans, and applying the dynamic class annotations. You can speed up your integration test cycle by loading all of this only once: run your tests in the Grails REPL.
However, the tradeoff is that there are dynamic reloading issues in the REPL. If you see random weirdness, exit the REPL and reload.
$> ./grailsw --plain-output
|Loading Grails 2.5.3
|Configuring classpath
|Enter a script name to run. Use TAB for completion:
grails> test-app -integration
... (loads some things)
...
grails> test-app -integration
... (faster loading)
And to reply to the other commenters: integration tests are useful as well; there is some code that cannot be tested with a unit test (for instance, HQL or SQL queries).