
How do I configure my unit tests to run automatically with Elm-Live?
I currently run elm-live as follows:
elm-live Home.elm --open --output=home.js
In addition to the automatic recompilation on every modification of my web app, I would also like to make sure I have not introduced any breaking changes by having my unit tests execute automatically after each compile.
Any suggestions?

You can use concurrently to run both processes in the same terminal instance.
The downside is that the stdout will probably not preserve the colors, so reading errors will be a little tricky.
concurrently 'elm-live Home.elm --open --output=home.js' 'elm-test --watch'
Example
I've made an example of this setup; check it out on GitHub.
UPD: I have updated the example to be Windows-compatible; the commands in package.json apparently need escaped double quotes instead of single quotes.
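For reference, a minimal package.json sketch of that setup (the "dev" script name is just an example; the two commands are the ones above), using the escaped double quotes mentioned in the update:
{
  "scripts": {
    "dev": "concurrently \"elm-live Home.elm --open --output=home.js\" \"elm-test --watch\""
  }
}
Running npm run dev then starts the live-reloading build and the test watcher side by side.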

Related

Limiting run to only one file does not work

I just installed Cypress and was test running it.
Running npm run cy-run will run all test files which takes quite a lot of time and can become confusing.
Note that I have not added a single test of mine. The tests are the default examples coming from Cypress installation.
When attempting to limit to a single file I found several sources - including this question - that all seem to agree that the following would limit the run to just one single file:
npm run cy-run --spec cypress/integration/2-advanced-examples/viewport.spec.js
But Cypress does not care and goes on to pick up all tests and run them:
Instead of trying to limit the run from the command line, you can - while writing and running your tests - chain .only onto the test.
Example, change this:
it("should do stuff", () => ...);
to this:
it.only("should do stuff", () => ...);
You can use describe.only as well if you want to run a whole suite - or, in your case, a whole file - on its own.
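For example (hypothetical suite and test names), isolating a whole file could look like this:
describe.only("viewport", () => {
  it("resizes correctly", () => {
    // assertions for this spec only
  });
});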
Another Option:
If you'd like to run only the tests that you've written, you can either remove those example files or change describe to xdescribe (or it to xit) and Cypress will skip the tests in question.
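For instance, the sample test from above would be skipped like this:
xdescribe("example suite", () => {
  it("should do stuff", () => {});
});
// or per test:
xit("should do stuff", () => {});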
Command Line Solution:
You're missing --; add that in and the command from the question should work.
It should be written like this:
npm run cy-run -- --spec cypress/integration/2-advanced-examples/viewport.spec.js
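The extra -- tells npm to stop parsing options itself and forward everything after it to the underlying command. Assuming cy-run is defined roughly like this (the question does not show the actual script):
{
  "scripts": {
    "cy-run": "cypress run"
  }
}
npm run cy-run -- --spec cypress/integration/2-advanced-examples/viewport.spec.js then expands to cypress run --spec ..., whereas without the -- npm consumes the --spec flag itself and Cypress never sees it.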

Is it possible to run all the tests using mocha even if there are some failures?

I am working on a Node.js application and would like to know if there is a way to run all the unit tests from all the sub-modules, even if some of them fail, so that I can see how many tests are failing in total and start fixing them. We use mocha for our tests on the back-end and Jest for the UI.
Thanks.
The default behavior for mocha is to run all the tests. If it is exiting after the first test failure, that suggests you are using the "bail" option, typically enabled on the command line with either --bail or -b.
Relevant docs: https://mochajs.org/#-bail-b
It can also be caused by passing the option { bail: true } to mocha.setup(). Look in your test runner and in your package.json.
Lastly, and least likely of these possibilities, it could also be caused by a call to this.bail() somewhere in your Mocha suites.
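As a quick checklist, these are the usual places a bail setting hides (the file and suite names below are only examples, not taken from the question):
// package.json - remove --bail / -b from the mocha script:
//   "test": "mocha --bail test/**/*.spec.js"

// browser or programmatic setup - make sure bail isn't turned on:
mocha.setup({ ui: 'bdd', bail: false });

// inside a suite - look for an explicit call like this and remove it:
describe('my suite', function () {
  this.bail(true); // stops the run at the first failure in this suite
});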

How can I make running individual tests faster with Jest in WebStorm?

WebStorm has a feature that lets you right-click on an it and run that test. I use it often in my workflow.
When I choose 'mocha' as the runner, it runs the command shown below and is basically instantaneous. Jest takes over 20 seconds, presumably because it's scanning all my files to find a pattern match.
Is there any way to make this faster? There is no question that running all of our tests is faster when run through jest... but it's terrible for running individual tests like when you're debugging.
/usr/local/bin/node /Users/blake/Documents/git/handle/node_modules/mocha/bin/mocha --ui bdd --reporter /Applications/WebStorm.app/Contents/plugins/NodeJS/js/mocha-intellij/lib/mochaIntellijReporter.js /Users/blake/Documents/git/handle/lib/test/helpers/state-abbr-helper.spec.js --grep "^#state-abbr-helper fake test$"
this test did nothing at all...
/usr/local/bin/node --require /Applications/WebStorm.app/Contents/plugins/JavaScriptLanguage/helpers/jest-intellij/lib/jest-intellij-stdin-fix.js /Users/blake/Documents/git/handle/node_modules/jest/bin/jest.js --colors --reporters /Applications/WebStorm.app/Contents/plugins/JavaScriptLanguage/helpers/jest-intellij/lib/jest-intellij-reporter.js --verbose "--testNamePattern=^#state-abbr-helper fake test$" --runTestsByPath /Users/blake/Documents/git/handle/lib/test/helpers/state-abbr-helper.spec.js
console.log lib/test/helpers/state-abbr-helper.spec.js:7
this test did nothing at all...
Check your jest.config.js to see if it's doing anything heavy at test startup, such as:
setupFilesAfterEnv: ['<rootDir>/_ui/test/setupTest.js'],
In my case it was coverage collection slowing things down. I solved it by adding
--collectCoverage=false
to the run configuration, overriding the setting from the config file.
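If you would rather keep the change out of the run configuration, the equivalent switch in jest.config.js looks roughly like this (a sketch, not the asker's actual config):
// jest.config.js
module.exports = {
  // skip coverage instrumentation for fast single-test runs
  collectCoverage: false,
  // heavy entries here (e.g. setupFilesAfterEnv) also add startup time
};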

How to get around memory error with karma & phantomjs

We're running tests using Karma and PhantomJS. Last week, our tests mysteriously started crashing PhantomJS with an error of -1073741819.
Based on this thread for Chutzpah, it appears that this code indicates a native memory failure in PhantomJS.
Upon further investigation, we are consistently seeing phantom crash around 750MB of memory.
Is there a way to configure Karma so that it does not run up against this limit? Or a way to tell it to flush phantom?
We only have around 1200 tests so far. We're about 1/4 of the way through our project, so 5000 UI tests doesn't seem out of the question.
Thanks to the StackOverflow phenomenon of posting a question and quickly discovering an answer, we solved this by adding gulp tasks. Before, we were just running karma start at the command line, which spun up a single instance of PhantomJS that crashed when 750MB was reached.
Now we have a gulp command for each of our sections of tests, e.g. gulp common-tests, gulp admin-tests and gulp customer-tests,
then a single gulp karma task that runs each of those groupings. This allows each gulp command to have its own instance of PhantomJS and therefore stay under that threshold.
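A rough sketch of what that gulpfile looks like (the per-group config file names are assumptions, and this uses gulp 4's series API together with Karma's programmatic Server):
// gulpfile.js (sketch)
const gulp = require('gulp');
const { Server } = require('karma');

// helper: build a gulp task that runs one karma config to completion
function runKarma(configFile) {
  return (done) => {
    new Server({ configFile: __dirname + '/' + configFile, singleRun: true }, (exitCode) => {
      done(exitCode === 0 ? null : new Error('Karma exited with code ' + exitCode));
    }).start();
  };
}

gulp.task('common-tests', runKarma('karma.common.conf.js'));
gulp.task('admin-tests', runKarma('karma.admin.conf.js'));
gulp.task('customer-tests', runKarma('karma.customer.conf.js'));

// run the groups one after another so each gets a fresh PhantomJS instance
gulp.task('karma', gulp.series('common-tests', 'admin-tests', 'customer-tests'));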
We ran into a similar issue. Your approach is interesting and certainly sidesteps the problem; however, be prepared to face it again later.
I've done some investigation and found the cause of memory growth (at least in our case). Turns out when you use:
beforeEach(inject(function (SomeActualService) { ... }));
the memory taken up by SomeActualService does not get released at the end of the describe block, and if you have multiple test files that inject the same service (or other injectable objects), more memory is allocated for it each time.
I have a couple of ideas on how to avoid this:
1. Create mock objects and never use inject to get real objects unless you are in the test that tests that module. This will require writing tons of extra code.
2. Create your own tracker (for tests only) for injectable objects. That way they can be loaded only once and reused between test files.
Forgot to mention: We are using angular 1.3.2, Jasmine 2.0 and hit this problem around 1000 tests.
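A sketch of the first idea - registering a stub through angular-mocks' $provide so the real service is never instantiated (the module, service, and method names here are made up):
beforeEach(module('myApp', function ($provide) {
  // every test in this file gets a lightweight stub instead of SomeActualService
  $provide.value('SomeActualService', {
    doThing: jasmine.createSpy('doThing').and.returnValue('stubbed result')
  });
}));

it('uses the stub', inject(function (SomeActualService) {
  expect(SomeActualService.doThing()).toBe('stubbed result');
}));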
I was also running into this issue after about 1037 tests on Windows 10 with PhantomJS 1.9.18.
It would appear as ERROR [launcher]: PhantomJS crashed. once the RAM for the process exceeded about 800-850 MB.
There appears to be a temporary fix here:
https://github.com/gskachkov/karma-phantomjs2-launcher
https://www.npmjs.com/package/karma-phantomjs2-launcher
You install it via npm install karma-phantomjs2-launcher --save-dev
But then you need to use it in karma.conf.js via
config.set({
  browsers: ['PhantomJS2'],
  ...
});
This seems to run the same set of tests while only using between 250-550 MB RAM and without crashing.
Note that this fix works out of the box on Windows and OS X, but not Linux (PhantomJS2 binaries won't start). This affects pushes to Travis CI.
To work around this issue on Debian/Ubuntu:
sudo apt-get install libicu52 libjpeg8 libfontconfig libwebp5
This is a problem with PhantomJS. According to another source, PhantomJS only runs the garbage collector when the page is closed, and this only happens after your tests run. Other browsers work fine because their garbage collectors work as expected.
After spending a few days on the issue, we concluded that the best solution was to split tests into groups. We had grunt create a profile for each directory dynamically and created a command that runs all those profiles. For all intents and purposes, it works just the same.
We had a similar issue on Linux (Ubuntu) that turned out to be caused by the number of memory map areas the process can manage:
$ cat /proc/sys/vm/max_map_count
65530
Then run this:
$ sudo bash -c 'echo 6553000 > /proc/sys/vm/max_map_count'
Note the number was multiplied by 100.
This changes the setting only until the next reboot. If it solves the problem, you can persist it across reboots:
$ sudo bash -c 'echo vm.max_map_count = 6553000 > /etc/sysctl.d/60-max_map_count.conf'
Responding to an old question, but hopefully this helps ...
I have a build process that a CI job runs on a command-line-only Linux box, so PhantomJS seems to be my only option there. I have experienced this memory issue locally on my Mac, but somehow it doesn't happen on the Linux box. My solution was to add another test command to my package.json that runs Karma using Chrome, and to run that locally. When pushed up, Jenkins kicks off the regular test command, running PhantomJS.
Install this plugin: https://github.com/karma-runner/karma-chrome-launcher
Add this to package.json:
"scripts": {
  "test": "karma start",
  "test:chrome": "karma start --browsers Chrome"
}

Running a set of actions before every test-file in mocha

I've recently started working with mocha to test my Express.js server.
My tests are separated into multiple files and most of them contain duplicated segments (mostly before statements that load all the fixtures into the DB, etc.), which is really annoying.
I guess I could export them all from a single file and import them into each and every test, but I wonder if there are more elegant solutions - such as running one file that has all the setup commands and another file that contains all the tear-down commands.
If anyone knows the answer that would be awesome :)
There are three basic levels of factoring out common functionality for mocha tests. If you want to load some fixtures once for a bunch of tests (and you've written each test to be independent), use the before function to load the fixtures at the top of the file. You can also use the beforeEach function if you need the fixtures re-initialized before each test.
The second option (which is related) is to pull out common functionality into a separate file, or set of files, and require it wherever it is needed.
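For example, a shared fixtures helper could look like this (the file layout, the db handle, and its methods are only an illustration):
// test/helpers/fixtures.js - shared setup, defined once
exports.userFixtures = [
  { id: 1, name: 'alice' },
  { id: 2, name: 'bob' }
];

exports.loadFixtures = async function (db) {
  // "db" is whatever handle your app exposes; this is only a sketch
  await db.users.deleteAll();
  await db.users.insertMany(exports.userFixtures);
};

// users.spec.js
const { loadFixtures } = require('./helpers/fixtures');
const db = require('../db'); // hypothetical app DB module

describe('users endpoint', function () {
  before(function () {
    return loadFixtures(db); // mocha waits on the returned promise
  });

  it('lists the fixture users', function () {
    // ...assertions against the seeded data
  });
});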
Finally, note that mocha has a root level hook:
You may also pick any file and add "root"-level hooks. For example, add beforeEach() outside of all describe() blocks. This will cause the callback to beforeEach() to run before any test case, regardless of the file it lives in (this is because Mocha has an implied describe() block, called the "root suite").
We use that to start an Express server once (and we use an environment variable so that it runs on a different port than our development server):
before(function () {
  process.env.NODE_ENV = 'test';
  require('../../app.js');
});
(We don't need a done() here because require is synchronous.) This way, the server is started exactly once, no matter how many different test files run alongside this root-level before hook.
The advantage of splitting things up this way is that we can run npm test to run all tests, or run mocha on any specific file or folder, or on any specific test or set of tests (using it.only and describe.only), and all of the prerequisites for the selected tests will still run.
Why not use mocha -r <module>, or even mocha.opts?
Have your common setup in, say, init.js and then run
mocha -r ./init
which will cause Mocha to load ./init.js before loading any of the test files.
You could also put it in mocha.opts inside your tests directory, and have its contents be
--require ./init
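With the mocha.opts-era versions of Mocha discussed here, the required file simply runs before any spec file is loaded, so init.js can hold the one-time setup - something along the lines of the earlier answer (the path is an assumption):
// init.js - loaded once via --require before any test file
process.env.NODE_ENV = 'test';
require('./app'); // boot the Express server once for the whole run
Newer Mocha versions expose the same idea through "root hook plugins" for files passed to --require.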