How to get around memory error with karma & phantomjs - phantomjs

We're running tests using Karma and PhantomJS. Last week, our tests mysteriously started crashing PhantomJS with an error of -1073741819.
Based on this thread for Chutzpah, it appears that error code indicates a native memory failure in PhantomJS.
Upon further investigation, we are consistently seeing phantom crash around 750MB of memory.
Is there a way to configure Karma so that it does not run up against this limit? Or a way to tell it to flush phantom?
We only have around 1200 tests so far. We're about 1/4 of the way through our project, so 5000 UI tests doesn't seem out of the question.

Thanks to the StackOverflow phenomenon of posting a question and quickly discovering an answer, we solved this by adding gulp tasks. Before, we were just running karma start at the command line, which spun up a single instance of PhantomJS that crashed once 750MB was reached.
Now we have a gulp command for each of our sections of tests, e.g. gulp common-tests, gulp admin-tests and gulp customer-tests,
and then a single gulp karma task that runs each of those groupings; a sketch of the gulpfile is shown below. This gives each gulp command its own instance of PhantomJS, and therefore stays underneath that threshold.
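A minimal sketch of that gulpfile, assuming gulp 4 and Karma's public Server API; the per-section config files (karma.common.conf.js, etc.) are hypothetical names:
// gulpfile.js
var gulp = require('gulp');
var Server = require('karma').Server;

// Each invocation spawns its own PhantomJS process, so memory is
// released once that group of tests finishes.
function runKarma(configFile) {
  return function (done) {
    new Server({
      configFile: __dirname + '/' + configFile,
      singleRun: true
    }, done).start();
  };
}

gulp.task('common-tests', runKarma('karma.common.conf.js'));
gulp.task('admin-tests', runKarma('karma.admin.conf.js'));
gulp.task('customer-tests', runKarma('karma.customer.conf.js'));

// A single command that runs each grouping in sequence.
gulp.task('karma', gulp.series('common-tests', 'admin-tests', 'customer-tests'));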

We ran into a similar issue. Your approach is interesting and certainly sidesteps the problem. However, be prepared to face it again later.
I've done some investigation and found the cause of the memory growth (at least in our case). It turns out that when you use:
beforeEach(inject(function (SomeActualService) { .... }));
the memory taken up by SomeActualService does not get released at the end of the describe block, and if you have multiple test files that inject the same service (or other injectable objects), more memory is allocated for it again each time.
I have a couple of ideas on how to avoid this:
1. Create mock objects and never use inject to get real objects unless you are in the test that tests that module (see the sketch below). This will require writing tons of extra code.
2. Create your own tracker (for tests only) for injectable objects, so that they are loaded only once and reused between test files.
Forgot to mention: we are using Angular 1.3.2 and Jasmine 2.0, and hit this problem at around 1000 tests.
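With Jasmine 2.0 and Angular 1.x, idea 1 might look roughly like this; the module name, mock shape and spy are hypothetical:
describe('widget under test', function () {
  var someActualServiceMock;

  beforeEach(module('myApp', function ($provide) {
    // Register a cheap hand-written mock under the same token, so the
    // real SomeActualService is never instantiated or retained.
    someActualServiceMock = {
      load: jasmine.createSpy('load').and.returnValue('stub data')
    };
    $provide.value('SomeActualService', someActualServiceMock);
  }));

  it('uses the mock', inject(function (SomeActualService) {
    expect(SomeActualService.load()).toBe('stub data');
  }));
});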

I was also running into this issue after about 1037 tests on Windows 10 with PhantomJS 1.9.18.
It would appear as ERROR [launcher]: PhantomJS crashed. once the RAM for the process exceeded about 800-850 MB.
There appears to be a temporary fix here:
https://github.com/gskachkov/karma-phantomjs2-launcher
https://www.npmjs.com/package/karma-phantomjs2-launcher
You install it via npm install karma-phantomjs2-launcher --save-dev
You then need to use it in karma.conf.js via:
config.set({
  browsers: ['PhantomJS2'],
  // ...
});
This seems to run the same set of tests while only using between 250-550 MB RAM and without crashing.
Note that this fix works out of the box on Windows and OS X, but not Linux (PhantomJS2 binaries won't start). This affects pushes to Travis CI.
To work around this issue on Debian/Ubuntu:
sudo apt-get install libicu52 libjpeg8 libfontconfig libwebp5

This is a problem with PhantomJS. According to another source, PhantomJS only runs its garbage collector when a page is closed, and that only happens after your tests have run. Other browsers work fine because their garbage collectors work as expected.
After spending a few days on the issue, we concluded that the best solution was to split the tests into groups. We had Grunt create a Karma profile for each directory dynamically and created a command that runs all of those profiles. For all intents and purposes, it works just the same.
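A minimal sketch of that setup, assuming the grunt-karma plugin; the directory layout and task name are hypothetical:
// Gruntfile.js
module.exports = function (grunt) {
  var path = require('path');

  // Build one Karma target per test directory so each group
  // runs in its own PhantomJS instance.
  var karmaTargets = {};
  grunt.file.expand({ filter: 'isDirectory' }, 'test/*').forEach(function (dir) {
    karmaTargets[path.basename(dir)] = {
      configFile: 'karma.conf.js',
      files: [{ src: [dir + '/**/*.spec.js'] }],
      singleRun: true
    };
  });

  grunt.initConfig({ karma: karmaTargets });
  grunt.loadNpmTasks('grunt-karma');

  // One command that runs every group in sequence.
  grunt.registerTask('test-all', Object.keys(karmaTargets).map(function (t) {
    return 'karma:' + t;
  }));
};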

We had a similar issue on Linux (Ubuntu) that turned out to be the number of memory mappings (segments) the process is allowed to manage:
$ cat /proc/sys/vm/max_map_count
65530
Then run this:
$ sudo bash -c 'echo 6553000 > /proc/sys/vm/max_map_count'
Note the number was multiplied by 100.
This changes the setting for the current session only. If it solves the problem, you can persist it for all future sessions:
$ sudo bash -c 'echo vm.max_map_count = 6553000 > /etc/sysctl.d/60-max_map_count.conf'
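You can check the value currently in effect with:
$ sysctl vm.max_map_count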

Responding to an old question, but hopefully this helps ...
I have a build process that a CI job runs on a command-line-only Linux box, so PhantomJS seems to be my only option there. I have experienced this memory issue locally on my Mac, but somehow it doesn't happen on the Linux box. My solution was to add another test command to my package.json that runs Karma using Chrome, and to run that locally for my tests. When code is pushed, Jenkins kicks off the regular test command, which runs PhantomJS.
Install this plugin: https://github.com/karma-runner/karma-chrome-launcher
Add this to the scripts section of package.json:
"scripts": {
  "test": "karma start",
  "test:chrome": "karma start --browsers Chrome"
}

Related

Lerna concurrency errors in Docker container

I have a typescript monorepo managed with Lerna.
The tests are done with Jest.
There are tens of packages that have the test script defined, while the jest config is stored in a central location and used by all.
An example test script looks like this:
jest --config ../../../tests/jest.config.json --setupFiles ../../../tests/jest-setup.js --rootDir .
Each package has a different number of "../" depending on its location in the source folder tree.
It works 100% of the time on multiple platforms like Windows, Linux and Mac.
For some reason, when we run it inside a Docker container and the concurrency isn't set to 1, we see the jest process of one package actually receiving the parameters of another one, which causes it to fail:
#cmd/example-package: > #cmd/example-package#0.0.0 test:integration
#cmd/example-package: > jest --config ../../tests/jest.config.json --setupFiles ../../tests/jest-setup.js --setupFilesAfterEnv ../../tests/setEnvVars.js --rootDir . --testPathPattern=./integration-tests --runInBand
#cmd/example-package: jest parameters: /src/cmd/example-package , --config,../../../tests/jest.config.json,--setupFiles,../../../tests/jest-setup.js,--setupFilesAfterEnv,../../../tests/setEnvVars.js,--rootDir,.,--testPathPattern=./integration-tests,--runInBand
The last line is printed by code added at the beginning of the jest script, which prints the current folder and the parameters passed. You can see that the parameters Lerna reports passing to jest aren't the ones that were actually used.
We saw such errors in the build process as well.
Any idea on how to solve it would be highly appreciated.
We tried multiple Node.js base images (alpine, node, Bullseye), multiple Node versions and multiple Lerna versions.
We even tried switching from Lerna to Turborepo, but still get these concurrency errors.
In our case, the problem was related to installing a different version of npm on top of the alpine node Docker image in one of the container's base images.
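For illustration, the problematic pattern looked roughly like this (the base image tag and npm version are hypothetical):
# Dockerfile of an intermediate base image
FROM node:18-alpine

# This kind of line was the culprit: it replaces the npm version
# bundled with the node image with a different one.
RUN npm install -g npm@9
Removing the global npm override (or aligning it with the version bundled in the image) is the kind of change to look for.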
Leaving the question here, in case it helps someone.

Is the only way to start a server side (back end) to run it from the command line with something like "npm start"?

...or is there something like compiling the project so that it becomes auto-executable?
Sorry for the general question. I have been doing small projects with a server side, and I find that I always need to type "npm start" or similar to get the whole thing working.
My doubt is: do these projects need to be compiled somehow, or is it just as is, a simple command that runs the coded files and thereby works as the server side?
Also, shouldn't a server side (by definition) be able to run by itself when the system restarts? So far, I have needed to create .bat files / Startup-folder entries in Windows to make them run after a restart.
According to NPM documentation:
npm start
This runs an arbitrary command specified in the package’s "start" property of its "scripts" object. If no "start" property is specified on the "scripts" object, it will run node server.js.
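For example, a minimal package.json might declare:
"scripts": {
  "start": "node server.js"
}
so that npm start simply runs node server.js.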
To start the server you have to start a process, and that process is what npm start launches. A killed process cannot bring itself back to life, so if the process dies (e.g. when the machine restarts) you have to make sure a new one is spawned automatically. You can accomplish this in multiple ways: you could use system services (for example via systemctl on Debian), or you could use tools like Kubernetes, which can automatically restart a container after a crash.
Another possible solution is to use something like respawn, which lets you respawn a process from Node.js code if it crashes. Of course, it can also be accomplished with plain Node.js.
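A minimal sketch of the "plain NodeJS" option, assuming a hypothetical server.js entry point: a tiny supervisor that restarts the server process whenever it exits.
// supervisor.js
const { spawn } = require('child_process');

function start() {
  const child = spawn('node', ['server.js'], { stdio: 'inherit' });
  child.on('exit', (code) => {
    console.log(`server exited with code ${code}, restarting in 1s...`);
    setTimeout(start, 1000); // simple delay before respawning
  });
}

start();
For starting after a reboot, though, you still need the operating system (a service manager such as systemd, or the Windows Startup folder) to launch this supervisor in the first place.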

How can I make running individual tests faster with Jest in WebStorm?

WebStorm has a feature that lets you right-click on an it and run that test. I use it often in my workflow.
When I choose 'mocha' it runs like the first command below and is basically instantaneous. Jest takes over 20 seconds, presumably because it's scanning all my files to find a pattern match.
Is there any way to make this faster? There is no question that running all of our tests is faster through Jest... but it's terrible for running individual tests, like when you're debugging.
/usr/local/bin/node /Users/blake/Documents/git/handle/node_modules/mocha/bin/mocha --ui bdd --reporter /Applications/WebStorm.app/Contents/plugins/NodeJS/js/mocha-intellij/lib/mochaIntellijReporter.js /Users/blake/Documents/git/handle/lib/test/helpers/state-abbr-helper.spec.js --grep "^#state-abbr-helper fake test$"
this test did nothing at all...
/usr/local/bin/node --require /Applications/WebStorm.app/Contents/plugins/JavaScriptLanguage/helpers/jest-intellij/lib/jest-intellij-stdin-fix.js /Users/blake/Documents/git/handle/node_modules/jest/bin/jest.js --colors --reporters /Applications/WebStorm.app/Contents/plugins/JavaScriptLanguage/helpers/jest-intellij/lib/jest-intellij-reporter.js --verbose "--testNamePattern=^#state-abbr-helper fake test$" --runTestsByPath /Users/blake/Documents/git/handle/lib/test/helpers/state-abbr-helper.spec.js
console.log lib/test/helpers/state-abbr-helper.spec.js:7
this test did nothing at all...
Check your jest.config.js to see if it's doing anything heavy at test startup, such as something like:
setupFilesAfterEnv: ['<rootDir>/_ui/test/setupTest.js'],
In my case it was coverage collection slowing it down. I solved it by adding
--collectCoverage=false
to the run configuration, overriding the file configuration.
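For reference, the same two knobs in a jest.config.js might look roughly like this (the setup file path is hypothetical):
// jest.config.js
module.exports = {
  // Heavy per-file setup here runs before every test file and slows
  // down single-test runs from the IDE.
  setupFilesAfterEnv: ['<rootDir>/_ui/test/setupTest.js'],
  // Coverage collection adds noticeable overhead; it can be disabled
  // here or overridden per run with --collectCoverage=false.
  collectCoverage: false
};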

Selenium crashing in Docker due to Browsing context has been discarded

How do you run Selenium based tests inside Docker?
I'm trying to get some Python+Selenium tests, which use Firefox and Geckodriver, to run under an Ubuntu 18 Docker image.
My docker-compose.yml file is simply:
version: "3.5"
services:
app_test:
build:
context: .
shm_size: '4gb'
mem_limit: 4096MB
dockerfile: Dockerfile.test
Unfortunately, most tests are failing with errors like:
selenium.common.exceptions.NoSuchWindowException: Message: Browsing context has been discarded
The few search results I can find mentioning this error suggest it's because of low memory. The server I'm running the tests on has 8GB of total memory, although I also tested on a machine with 32GB and received the same error.
I also added a call to print the output of top before each test, and it's showing virtually no memory usage, so I'm not sure what would be causing the test to crash due to insufficient memory.
Some articles suggested adding the shm_size and mem_limit lines, but those had no effect.
I've also tried different versions of Firefox, from the most recent version 71 down to the older ESR releases, to rule out a bug caused by incompatible versions of Firefox+Selenium+Geckodriver. I'm otherwise following this compatibility table.
What is causing this error and how do I fix it?
The root cause could be the container running out of memory, in particular the shared memory available under /dev/shm.
To fix it, run the Docker container with --shm-size added.
Example:
--shm-size="2G"
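For instance, with plain docker run (the image name is hypothetical):
docker run --shm-size="2G" my-app-test-image
Note that shm_size placed under the build section of docker-compose.yml only applies while the image is being built; depending on your Compose version, the runtime shared memory size may need to be set at the service level instead.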

Protractor test times out randomly in Docker on Jenkins, works fine in Docker locally

When using the APIs defined by Protractor & Jasmine (the default/supported runner for Protractor), the tests always work fine on individual developer laptops. For some reason, when the tests run on the Jenkins CI server they fail, despite using the same Docker containers on both hosts, which was wildly frustrating.
This error occurs: A Jasmine spec timed out. Resetting the WebDriver Control Flow.
This error also appears: Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
Setting getPageTimeout & allScriptsTimeout to 30 seconds had no effect on this.
I tried changing jasmine.DEFAULT_TIMEOUT_INTERVAL to 60 seconds for all tests in this suite; once the first error appears, every subsequent test waits the full 60 seconds and then times out.
I've read and reread Protractor's page on timeouts but none of that seems relevant to this situation.
Even stranger, it seemed like some kind of buffer issue: at first the tests would always fail on a particular spec, and nothing about that spec looked wrong. While debugging I upgraded the selenium Docker container from 2.53.1-beryllium to 3.4.0-einsteinium and the tests still failed, but a couple of specs further down - suggesting that some optimization in the update let it get more done before giving out.
I confirmed that by rearranging the order of the specs: the specs that had consistently failed before now passed, and a test that previously passed began to fail (at around the same point in the overall test duration as the failures before the reorder).
Environment:
protractor - 5.1.2
selenium/standalone-chrome-debug - 3.4.0-einsteinium
docker - 1.12.5
The solution ended up being simple. I first found it in a Chrome bug report, and it turned out it was also listed right on the front page of the docker-selenium repo, but the text wasn't clear about what it was for when I read it the first time. (It says that Selenium will crash without it, but the errors I was getting from Jasmine only talked about timeouts, which was quite misleading.)
Chrome uses /dev/shm, which is fairly small inside Docker by default. There are workarounds for Chrome and Firefox linked from the docker-selenium README that explain how to resolve the issue.
I had a couple of test suites fail after applying the fix, but all of the suites have been running and passing for the last day, so I think that was indeed the problem and this solution works. Hope this helps!
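For reference, the commonly documented workaround amounts to giving the browser container a usable /dev/shm, for example by mounting the host's (the tag below is the one from the environment listed above):
docker run -d -v /dev/shm:/dev/shm selenium/standalone-chrome-debug:3.4.0-einsteinium
Passing a larger --shm-size to the container is the other variant of the same fix.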