Garbled test result output from meteortesting:mocha

The recommended testing framework for Meteor 1.7 seems to be meteortesting:mocha.
With Meteor 1.7.0.3 I created a default app (meteor create my-app), which has the following tests (in test/main.js)
import assert from "assert";

describe("my-app", function () {
  it("package.json has correct name", async function () {
    const { name } = await import("../package.json");
    assert.strictEqual(name, "noteit");
  });

  if (Meteor.isClient) {
    it("client is not server", function () {
      assert.strictEqual(Meteor.isServer, false);
    });
  }

  if (Meteor.isServer) {
    it("server is not client", function () {
      assert.strictEqual(Meteor.isClient, false);
    });
  }
});
I ran
meteor add meteortesting:mocha
meteor test --driver-package meteortesting:mocha
and with meteortesting:mocha@2.4.5_6 I got this in the console:
I20180728-12:06:37.729(2)? --------------------------------
I20180728-12:06:37.729(2)? ----- RUNNING SERVER TESTS -----
I20180728-12:06:37.729(2)? --------------------------------
I20180728-12:06:37.729(2)?
I20180728-12:06:37.730(2)?
I20180728-12:06:37.731(2)?
I20180728-12:06:37.737(2)? the server
✓ fails a test.753(2)?
I20180728-12:06:37.755(2)?
I20180728-12:06:37.756(2)?
I20180728-12:06:37.756(2)? 1 passing (26ms)
I20180728-12:06:37.756(2)?
I20180728-12:06:37.757(2)? Load the app in a browser to run client tests, or set the TEST_BROWSER_DRIVER environment variable. See https://github.com/meteortesting/meteor-mocha/blob/master/README.md#run-app-tests
=> Exited with code: 0
=> Your application is crashing. Waiting for file change.
Actually, it was repeated three times. Not pretty. And I wasn't expecting a passing test to crash my app.
Also, in the browser I got this:
I was expecting something more like the nice output, as per the Meteor testing guide:

As with most things Node.js, there are a multitude of forks of almost anything. The same goes for meteortesting:mocha.
cultofcoders:mocha seems to be a few commits ahead of practicalmeteor:mocha, which was at one point the recommended testing framework for Meteor.
If you run
meteor add cultofcoders:mocha
meteor test --driver-package cultofcoders:mocha
you'll get the nice output.
As a curiosity, I found that the version of cultofcoders:mocha I got (meteor list | grep mocha) was 2.4.6, a version that the GitHub repo does not have...

The screenshot you reference was made using practicalmeteor:mocha, but meteortesting:mocha is not (as the other answer claims) a fork of it; it is a separately developed package aiming for the same goal, which is running tests in Meteor.
The usage of the two packages is very different, and meteortesting:mocha might look a bit trickier to set up; note that this list only applies to its version 1.0.1 and might change later.
I have to admit that the documentation needs a refresh. Anyway, here are some helpful tips, which I'll include in the documentation soon.
If you just want to get started, run this:
meteor add meteortesting:mocha
npm i --save-dev puppeteer@^1.5.0
TEST_BROWSER_DRIVER=puppeteer meteor test --driver-package meteortesting:mocha --raw-logs --once
Do you want to exit after the tests are completed or re-run them after file-change?
Usually, Meteor will restart your application when it exits (a normal exit or a crash), which includes the test-runner.
If you want to use it in CI, or you just want to run the tests once, add --once to the meteor command; otherwise set TEST_WATCH=1 before running this script. If you don't set the env variable and don't pass --once, Meteor will print these lines and restart the tests once they're finished:
=> Exited with code: 0
=> Your application is crashing. Waiting for file change.
As of now I haven't found a way to check whether the --once flag is set, which would make the env variable unnecessary. Still, the flexibility to choose between CI and continuous testing is very useful.
Maybe you're currently working on a feature and want to run the tests as you work. If you have set TEST_WATCH=1 and are not using --once, Meteor will restart the tests once it registers that a file has changed. You can even limit which tests run using MOCHA_GREP (see the example below).
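For example (a sketch only; the grep pattern and the use of puppeteer here are just illustrations), you could watch and re-run only the client/server sanity tests from the default app:
TEST_WATCH=1 TEST_BROWSER_DRIVER=puppeteer MOCHA_GREP="is not" meteor test --driver-package meteortesting:mocha --raw-logs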
Where and how do you want to see the results?
You currently have to choose between seeing all the test results on the command line, or seeing the server tests on the command line and the client tests in the browser. Currently meteortesting:mocha does not support showing the results of both the server and the client tests in the browser, as your screenshot shows.
Please take a look at the package documentation for further details.
You should disable the Meteor timestamp to make the output look better.
Tests might look quite garbled because of the timestamp added to every line. To avoid this, add --raw-logs to your command.
I hope this answers most of your question. I know that the documentation needs some improvements, and I would welcome it if someone took the time to put it into a more logical order for people who "just want to get started".

Related

Ballerinalang configuration

I was trying to run a Ballerina program in IntelliJ IDEA. Then the Edit Configuration dialog appears and it says
Error: Main run kind is selected, but the file does not contain a main function.
What should I do? And what should I select in Program Arguments?
Source code:
import ballerina.net.http;
import ballerina.lang.messages;

@http:BasePath {value:"/helloservice"}
service helloService {
    @http:GET {}
    @http:PATH {value:"/hello?name={name}"}
    resource hello (message m, @http:QueryParam {value:"name"} string name) {
        string respStr = "Hello, World " + name + "!\n";
        message response = {};
        messages:setStringPayload(response, respStr);
        reply response;
    }
}
It seems like the issue here is that you have manually created a main run configuration and are trying to run a service with it. Please select the service run kind in the configuration, as shown below.
Also, you don't have to manually create run configurations. The IntelliJ IDEA plugin can automatically detect the run kind when you run a main function or a service using the gutter run icon, like below.
The run configuration is created automatically.
If you first run a main and then run a service, the run configuration will automatically change accordingly as well, so no manual intervention is needed.
As a side note, the code seems to use a much older Ballerina syntax, and I would advise using the latest Ballerina syntax to avoid any issues with the latest IntelliJ IDEA plugin. Please refer to the Ballerina examples for the latest syntax.

Protractor flakiness

I maintain a complex Angular (1.5.x) application that is being E2E tested using Protractor (2.5.x). I am experiencing a problem with this approach, which manifests primarily as flaky tests. Tests that worked perfectly well in one pull request fail in another. This concerns simple locators, such as by.linkText(...). I debugged the failing tests: the app is on the correct page, and the links are present and accessible.
Has anyone else experienced these consistency problems? Does anyone know of a cause or workaround?
Just Say No to More End-to-End Tests!
That said, here are a few things you can do to tackle our mutual, merciless "flakiness" enemy:
update to the latest Protractor (currently 4.0.0), which also brings the latest selenium and chromedriver with it
turn off Angular animations (see the config sketch after this list)
use browser.wait() with a set of built-in or custom Expected Conditions. This is probably by far the most reliable way to approach the problem. Unfortunately, it is use-case and problem specific; you would need to modify your actual tests in the problematic places. For example, if you need to click an element, wait for it to be clickable:
var EC = protractor.ExpectedConditions;
var elm = $("#myid");
browser.wait(EC.elementToBeClickable(elm), 5000);
elm.click();
maximize the browser window (to avoid random "element not visible" or "not clickable" errors). Put this in onPrepare():
browser.driver.manage().window().maximize();
increase the Protractor and Jasmine timeouts (see the config sketch after this list)
slow Protractor down by tweaking the Control Flow (not sure if it works for 4.0.0, please test)
manually call browser.waitForAngular(); in problematic places. I am not sure why this helps but I've seen reports where it definitely helped to fix a flaky test.
use the jasmine done() callback in your specs. This may help, for example, to not start the it() block until done is called in beforeEach()
return a promise from the onPrepare() function. This usually helps to make sure things are prepared for the test run
use protractor-flake package that would automatically re-run failed tests. More like a quick workaround to the problem
There are also other problem-specific "tricks", like slow typing into the text box, clicking via JavaScript, etc.
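To illustrate the "turn off Angular animations" and "increase the timeouts" points above, here is a rough sketch of what the relevant parts of a Protractor config could look like. The timeout values and the mock-module name are arbitrary examples, and this is only one common way of disabling Angular 1.x animations, not the only one:
// protractor.conf.js (sketch)
exports.config = {
  // give Angular more time to settle on slow CI machines
  allScriptsTimeout: 30000,
  jasmineNodeOpts: {
    // raise Jasmine's per-spec timeout as well
    defaultTimeoutInterval: 60000
  },
  onPrepare: function() {
    // register a mock module that disables Angular 1.x animations for the whole run
    browser.addMockModule('disableNgAnimate', function() {
      angular.module('disableNgAnimate', []).run(['$animate', function($animate) {
        $animate.enabled(false);
      }]);
    });
    // maximize the window, as suggested above
    browser.driver.manage().window().maximize();
  }
};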
Yes, I think all of us have experienced such flakiness issues.
Actually, flakiness is quite a common issue with any browser automation tool. However, it is supposed to be less of a problem with Protractor, as Protractor has built-in waiting that performs actions only after the DOM has loaded properly. Still, in a few cases you might have to use some explicit waits if you see intermittent failures.
I prefer to use a few intelligent wait methods, like:
function waitForElementToClickable(locator) {
  var domElement = element(by.css(locator)),
      isClickable = protractor.ExpectedConditions.elementToBeClickable(domElement);

  return browser.wait(isClickable, 2000)
    .then(function () {
      return domElement;
    });
}
Here 2000 ms is used as the timeout; you can make it configurable using a variable. Sometimes I also fall back to browser.sleep() when none of my intelligent waits work.
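For illustration (the selector is made up), the helper above could be used like this:
waitForElementToClickable('#submit-button').then(function (button) {
  button.click();
});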
It's been my experience that some methods (e.g. sendKeys()) do not always fire at the expected time within the controlFlow() queue, and this will cause tests to be flaky. I work around this by specifically adding them to the controlFlow(). E.g.:
this.enterText = function(input, text) {
  return browser.controlFlow().execute(function() {
    input.sendKeys(text);
  });
};
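For example (a sketch; the locator and the pageObject name are just illustrations standing for whatever object the method above is defined on), it could be called like this:
var nameInput = element(by.css('#name'));
pageObject.enterText(nameInput, 'John Doe');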
A workaround that my team has been using is to re-run only failed tests using the plugin protractor-errors. Using this tool, we can identify real failures versus flaky tests within 2-3 runs. To add the plugin, just add a require statement to the bottom of the Protractor config's onPrepare function:
exports.config = {
  ...
  onPrepare: function() {
    require('protractor-errors');
  }
}
You will need to pass these additional parameters when you run your tests with the plugin:
protractor config.js --params.errorsPath 'jasmineReports' --params.currentTime (timestamp) --params.errorRun (true or false)
There is also a cli tool that will handle generating the currentTime if you don't have an easy way to pass in a timestamp.

PhantomJS crashes while running karma test cases with grunt-karma

We are facing an issue while running karma test cases with PhantomJS: PhantomJS crashes and gets disconnected.
Is that due to a memory leak or some other issue? Kindly let me know if someone has a suitable solution.
I found that a workaround is to break the test cases into multiple grunt tasks, but since we have a lot of test cases (more than 1500) that would not be feasible.
We are using the versions below:
Node: 0.10.32
Karma: 0.12.24
PhantomJS: 1.9.8 (karma-phantomjs-launcher)
Please let me know of any solutions asap.
There are two reasons I found that this can happen.
PhantomJS does not release memory until its tab is closed, so if your test suite is too large, you could be running out of memory.
karma-phantomjs-launcher and karma-phantomjs2-launcher do not hook the stdout/stderr output of the browser process they start, so I've seen instances where the started browser just hangs and gets disconnected, most likely because its stderr output fills up.
The first problem can be worked around by splitting your test suite into smaller ones (see the sketch below). Or, you could research whether there is a way to tell PhantomJS to run its JavaScript garbage collection, but I have not gone down that road, so I can't provide much more detail there.
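For illustration, splitting could look roughly like this in a Gruntfile (a sketch only; the target names and file globs are made up):
// Gruntfile.js (sketch): run the suite as two smaller grunt-karma targets
grunt.initConfig({
  karma: {
    options: {
      configFile: 'karma.conf.js',
      singleRun: true,
      browsers: ['PhantomJS']
    },
    // each target starts its own PhantomJS process, so memory is released in between
    suiteA: { options: { files: ['test/unit/moduleA/**/*.spec.js'] } },
    suiteB: { options: { files: ['test/unit/moduleB/**/*.spec.js'] } }
  }
});
grunt.registerTask('test', ['karma:suiteA', 'karma:suiteB']);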
The second problem can be fixed by:
using the latest karma-phantomjs-launcher version, which hooks the browser's stdout/stderr output (fixed in version 0.2.1)
using a version of karma-phantomjs2-launcher from its pull request #5, which brings in upstream changes from the base karma-phantomjs-launcher project and thus resolves the problem here as well.
I had the same kind of issue with random crashes. Though I did not find a way to avoid them, it is possible to restart the grunt task upon a crash:
grunt.registerTask('karma-with-retry', function (opt) {
  var done = this.async();
  var count = 0;
  var retry = function () {
    grunt.util.spawn({
      cmd: "grunt",
      args: ["connect", "karma"], // your tasks
      opts: {
        stdio: 'inherit'
      }
    }, function (error, result, code) {
      count++;
      if (error && code === 90 /* Replace with code thrown by karma */) {
        if (count < 5) {
          grunt.log.writeln("Retrying karma tests upon error: " + code);
          retry();
        } else {
          done(false);
        }
      } else {
        done(result);
      }
    });
  };
  retry();
});
Source https://github.com/ariya/phantomjs/issues/12325#issuecomment-56246505
I was getting a PhantomJS crash when asserting the following line:
dom.should.be.instanceof(HTMLCollection);
It worked in Chrome, but PhantomJS was crashing without any useful error message.
I was able to see the real error message after running the same test in a PhantomJS_debug browser with the debug option set to true (see the karma.conf.js sketch at the end of this answer).
The following error message started showing up:
The instanceof assertion needs a constructor but object was given.
Instead of
PhantomJS has crashed. Please read the bug reporting guide at
<http://phantomjs.org/bug-reporting.html> and file a bug report.
So Chrome was OK with the assertion, but PhantomJS 2.1.1 crashes with the above error. Hope this helps.
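For reference, a PhantomJS_debug launcher like the one mentioned above can be defined in karma.conf.js roughly like this (a sketch based on the custom-launcher mechanism of karma and karma-phantomjs-launcher; adjust it to your setup):
// karma.conf.js (sketch)
module.exports = function (config) {
  config.set({
    customLaunchers: {
      PhantomJS_debug: {
        base: 'PhantomJS',
        debug: true // start PhantomJS with its remote debugger enabled
      }
    },
    browsers: ['PhantomJS_debug']
  });
};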

Protractor tests results monitoring?

I was wondering if there is a service to which I could send data, then group that data however I want and later display it in charts or similar?
What I want to do is start Protractor e2e tests and let them run forever. Since some of the tests fail randomly, I cannot rely on them if they were included in the main build. What I'm going to do is run the tests in each supported browser over and over again and then capture which tests failed in which browser; if they passed, reset the failure counter, and so on. That way I can monitor how my system is doing without wasting my time waiting for a build to run successfully.
My data could look something like that:
{
  testName: 'Path to test + name',
  status: 'failed/passed',
  browser: 'firefox/chrome/safari/opera...'
}
Now, when a test fails in (let's say) Chrome over and over again, I will see that its error counter is constantly increasing, which means either something is wrong with the system or I need to fix the test in Chrome.
Does anyone know a service that could consume my data and display aggregated info?

CI with emberjs

I am currently researching ways to integrate a test suite for an application based on ember.js into travis-ci. So first off, we're not on the open-source service; we use it for private repositories, etc.
I looked at how several open-source projects run their ember.js test suite and it looks like they set up a server with their project which probably gets updated whenever someone pushes to the repository. Then PhantomJS is used to run the tests on that server (and actually not on travis-ci itself).
The problem I have with this approach is that this adds another step (and ultimately complexity): I have to update and maintain a server with the latest code so I can use PhantomJS to run the test suite.
Another drawback is that I don't see how it would enable us to test PRs (pull requests) either. The server would have to be updated with code from the PR. Testing PRs before they are merged is one of the great things about travis-ci.
I couldn't find much/anything about running ember.js tests only through the CLI – I am hoping someone tackled this issue before me.
I can't speak to your questions about travis-ci ... but I can offer some thoughts about unit testing ember.js code with jasmine.
Before I started using ember.js I was unit testing with jasmine and a simple node.js module called jasmine-node. This allowed me to quickly run a suite of jasmine unit tests from the command line without having to open a browser or hack around with a "js-test runner", etc.
That worked great when I had jasmine, jquery and simple javascript modules that I used to keep my javascript code human readable. But the moment I needed to use ember/handlebars/etc., the jasmine-node module fell down, because it expects you to have everything available on both global and window. And because ember is just a browser library, not everything was on "global".
I started looking at PhantomJS and, like yourself, couldn't see myself adding the complexity. So instead of hacking around this I decided to take a weekend and write what was missing from the jasmine test-runner space. I wanted the same power as jasmine-node (meaning all I would need on my CI box was a recent version of node.js and a simple npm module to run the tests).
I wrote an npm module called jasmine-phantom-node; at its core it uses node.js to run PhantomJS, which in turn fires up a regular jasmine HTML runner and scrapes the page for test results using a very basic express web app.
I spent the time to put 2 different examples in the GitHub project so others could quickly see how it works. It's opinionated, so you will need an HTML file in your project root that will be used by the plugin to execute your tests. It also requires jasmine and jasmine-html, along with a recent jQuery.
It solved this issue for me personally, and now I can write tests against ember using plain jasmine and run them from the command line without a browser.
Here is a sample jasmine unit test that I wrote against an ember view recently while spiking around with this test runner. Here is a link to the full ember / django project if you want to see how the view under test is used in the app.
require('static/script/vendor/filtersortpage.js');
require('static/script/app/person.js');

describe("PersonApp.PersonView Tests", function(){
  var sut, router, controller;

  beforeEach(function(){
    sut = PersonApp.PersonView.create();
    router = new Object({send: function(){}});
    controller = PersonApp.PersonController.create({});
    controller.set("target", router);
    sut.set("controller", controller);
  });

  it("does not invoke send on router when username does not exist", function(){
    var event = {'context': {'username': '', 'set': function(){}}};
    var sendSpy = spyOn(router, 'send');
    sut.addPerson(event);
    expect(sendSpy).not.toHaveBeenCalledWith('addPerson', jasmine.any(String));
  });

  it("invokes send on router with username when exists", function(){
    var event = {'context': {'username': 'foo', 'set': function(){}}};
    var sendSpy = spyOn(router, 'send');
    sut.addPerson(event);
    expect(sendSpy).toHaveBeenCalledWith('addPerson', 'foo');
  });

  it("does not invoke set context when username does not exist", function(){
    var event = {'context': {'username': '', 'set': function(){}}};
    var setSpy = spyOn(event.context, 'set');
    sut.addPerson(event);
    expect(setSpy).not.toHaveBeenCalledWith('username', jasmine.any(String));
  });

  it("invokes set context to empty string when username exists", function(){
    var event = {'context': {'username': 'foo', 'set': function(){}}};
    var setSpy = spyOn(event.context, 'set');
    sut.addPerson(event);
    expect(setSpy).toHaveBeenCalledWith('username', '');
  });
});
Here is the production ember view that I'm unit testing above
PersonApp.PersonView = Ember.View.extend({
  templateName: 'person',

  addPerson: function(event) {
    var username = event.context.username;
    if (username) {
      this.get('controller.target').send('addPerson', username);
      event.context.set('username', '');
    }
  }
});