I am currently researching ways to integrate a test suite for an ember.js-based application into travis-ci. First off: we're not on the open-source service, we use it for private repositories, etc.
I looked at how several open-source projects run their ember.js test suites: they seem to set up a server hosting their project, which presumably gets updated whenever someone pushes to the repository. PhantomJS is then used to run the tests against that server (and not on travis-ci itself).
The problem I have with this approach is that this adds another step (and ultimately complexity): I have to update and maintain a server with the latest code so I can use PhantomJS to run the test suite.
Another drawback is that I don't see how it would let us test PRs (pull requests) either: the server would have to be updated with the code from the PR. Testing PRs before they are merged is one of the great things about travis-ci.
I couldn't find much/anything about running ember.js tests only through the CLI – I am hoping someone tackled this issue before me.
I can't speak to your questions about travis-ci ... but I can offer some thoughts about unit testing ember.js code with jasmine.
Before I started using ember.js I was unit testing with jasmine and a simple node.js module called jasmine-node. This allowed me to quickly run a suite of jasmine unit tests from the command line without having to open a browser or hack around with "js-test runner" and the like.
That worked great when I had jasmine, jQuery and the simple javascript modules I used to keep my code human readable. But the moment I needed to use ember/handlebars/etc., jasmine-node fell down, because it expects everything to be available on both global and window. Since ember is a browser-only library, not everything ends up on "global".
I started looking at PhantomJS and, like you, couldn't see myself adding the complexity. So instead of hacking around it I decided to take a weekend and write what was missing from the jasmine test-runner space. I wanted the same power as jasmine-node (meaning all I would need on my CI box was a recent version of node.js and a simple npm module to run the tests).
I wrote an npm module called jasmine-phantom-node. At its core it uses node.js to launch PhantomJS, which in turn loads a regular jasmine HTML runner and scrapes the page for test results via a very basic express web app.
I put two different examples in the github project so others can quickly see how it works. It's opinionated, so you will need an html file in your project root that the plugin uses to execute your tests. It also requires jasmine and jasmine-html, along with a recent jQuery.
It solved this issue for me personally: I can now write plain jasmine tests against ember and run them from the command line without a browser.
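To give a rough idea of that setup, a runner file could look something like the sketch below; the file name, the vendor paths and the spec file are illustrative assumptions rather than the plugin's actual requirements (see its README for the specifics):
<!-- runner.html (illustrative sketch, not the plugin's canonical file) -->
<!DOCTYPE html>
<html>
  <head>
    <script src="vendor/jquery.js"></script>
    <script src="vendor/jasmine.js"></script>
    <script src="vendor/jasmine-html.js"></script>
    <script src="vendor/handlebars.js"></script>
    <script src="vendor/ember.js"></script>
    <!-- application code under test -->
    <script src="static/script/app/person.js"></script>
    <!-- spec files (hypothetical name) -->
    <script src="spec/person_view_spec.js"></script>
  </head>
  <body></body>
</html>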
Here is a sample jasmine unit test that I wrote against an ember view recently while spiking around with this test runner. Here is a link to the full ember / django project if you want to see how the view under test is used in the app.
require('static/script/vendor/filtersortpage.js');
require('static/script/app/person.js');

describe("PersonApp.PersonView Tests", function() {

  var sut, router, controller;

  beforeEach(function() {
    sut = PersonApp.PersonView.create();
    router = new Object({ send: function() {} });
    controller = PersonApp.PersonController.create({});
    controller.set("target", router);
    sut.set("controller", controller);
  });

  it("does not invoke send on router when username does not exist", function() {
    var event = { 'context': { 'username': '', 'set': function() {} } };
    var sendSpy = spyOn(router, 'send');
    sut.addPerson(event);
    expect(sendSpy).not.toHaveBeenCalledWith('addPerson', jasmine.any(String));
  });

  it("invokes send on router with username when exists", function() {
    var event = { 'context': { 'username': 'foo', 'set': function() {} } };
    var sendSpy = spyOn(router, 'send');
    sut.addPerson(event);
    expect(sendSpy).toHaveBeenCalledWith('addPerson', 'foo');
  });

  it("does not invoke set context when username does not exist", function() {
    var event = { 'context': { 'username': '', 'set': function() {} } };
    var setSpy = spyOn(event.context, 'set');
    sut.addPerson(event);
    expect(setSpy).not.toHaveBeenCalledWith('username', jasmine.any(String));
  });

  it("invokes set context to empty string when username exists", function() {
    var event = { 'context': { 'username': 'foo', 'set': function() {} } };
    var setSpy = spyOn(event.context, 'set');
    sut.addPerson(event);
    expect(setSpy).toHaveBeenCalledWith('username', '');
  });
});
Here is the production ember view that I'm unit testing above
PersonApp.PersonView = Ember.View.extend({
  templateName: 'person',

  addPerson: function(event) {
    var username = event.context.username;
    if (username) {
      this.get('controller.target').send('addPerson', username);
      event.context.set('username', '');
    }
  }
});
According to this article, Selenium 4 alpha has a sendDevToolsCommand that sends an arbitrary DevTools command to the browser and returns a promise that will be resolved when the command has finished:
Added “sendDevToolsCommand” and “setDownloadPath” for chrome.Driver.
But I can't seem to find how to use it. It sounds a bit like using JavaScript executor in Selenium.
Can someone provide an example usage? I'm using Selenium + Java.
The command to call the DevTools API was added to the Chrome driver a few years back.
You can already use it with Selenium even if the method is not yet present:
Take full page screenshot
Print To PDF
Inject some Javascript before the page loads
Block a network URL
Save/restore the cookies for all domains
Get transparent screenshot
This command gives you access to the DevTools API, which ChromeDriver uses internally to drive the browser.
The method takes the name of the command as the first argument and a dictionary of parameters as the second argument. To figure out how to call a command, add puppeteer to your searches, for instance "puppeteer set download location".
Note that executeCdpCommand is implemented in the Java master branch, so it should be available in the next release.
I couldn't find sendDevToolsCommand in the Selenium documentation yet, but the source defines the setDownloadPath you mentioned right below it, and setDownloadPath actually uses sendDevToolsCommand. Based on that usage, it seems like you could do something like:
const { Builder } = require("selenium-webdriver");

const driverInstance = await new Builder()
  .withCapabilities({ browserName: "chrome" })
  .build();

driverInstance.sendDevToolsCommand('Page.setDownloadBehavior', {
  behavior: 'allow',
  downloadPath: path
});
or for a visually obvious example:
await driverInstance.sendDevToolsCommand("Emulation.setDefaultBackgroundColorOverride", {
  color: { r: 0, g: 255, b: 0, a: 1 } // watch out, it's bright!
});
where the first argument is a Chrome DevTools Protocol domain method (e.g. Page.setDownloadBehavior or Emulation.setCPUThrottlingRate) and the second argument is an object containing the options for that method (as described in the same protocol docs).
Edit: just tested and the above works :)
I'm excited that this was added because it means that, in addition to network throttling, it should be pretty trivial to add cpu throttling to Selenium tests now! Something like:
driverInstance.sendDevToolsCommand('Emulation.setCPUThrottlingRate', {
  rate: 4 // throttle cpu 4x
});
The Selenium 4 release will have a user-friendly API for the Chrome DevTools protocol.
I just finished implementing Network and Performance domains for the Selenium Java client.
https://github.com/SeleniumHQ/selenium/pull/7212
In addition, there is a generic API for all domains in the Java client, which was merged a while ago.
All those new features will probably be released in the next alpha release.
The recommended testing framework for Meteor 1.7 seems to be meteortesting:mocha.
With Meteor 1.7.0.3 I created a default app (meteor create my-app), which has the following tests (in test/main.js):
import assert from "assert";

describe("my-app", function () {
  it("package.json has correct name", async function () {
    const { name } = await import("../package.json");
    assert.strictEqual(name, "noteit");
  });

  if (Meteor.isClient) {
    it("client is not server", function () {
      assert.strictEqual(Meteor.isServer, false);
    });
  }

  if (Meteor.isServer) {
    it("server is not client", function () {
      assert.strictEqual(Meteor.isClient, false);
    });
  }
});
I ran
meteor add meteortesting:mocha
meteor test --driver-package meteortesting:mocha
and with meteortesting:mocha#2.4.5_6 I got this in the console:
I20180728-12:06:37.729(2)? --------------------------------
I20180728-12:06:37.729(2)? ----- RUNNING SERVER TESTS -----
I20180728-12:06:37.729(2)? --------------------------------
I20180728-12:06:37.729(2)?
I20180728-12:06:37.730(2)?
I20180728-12:06:37.731(2)?
I20180728-12:06:37.737(2)? the server
✓ fails a test.753(2)?
I20180728-12:06:37.755(2)?
I20180728-12:06:37.756(2)?
I20180728-12:06:37.756(2)? 1 passing (26ms)
I20180728-12:06:37.756(2)?
I20180728-12:06:37.757(2)? Load the app in a browser to run client tests, or set the TEST_BROWSER_DRIVER environment variable. See https://github.com/meteortesting/meteor-mocha/blob/master/README.md#run-app-tests
=> Exited with code: 0
=> Your application is crashing. Waiting for file change.
Actually, it was repeated three times. Not pretty. And I wasn't expecting a passing test to crash my app.
Also in the browser I got this
I was expecting something more like the nice output, as per the Meteor testing guide:
As with most things in the Node.js world, there is a multitude of forks of almost anything, and meteortesting:mocha is no exception.
cultofcoders:mocha seems to be a few commits ahead of practicalmeteor:mocha, which was at one point the recommended testing framework for Meteor.
If you run
meteor add cultofcoders:mocha
meteor test --driver-package cultofcoders:mocha
you'll get the nice output.
As a curiosity, I found that the version of cultofcoders:mocha I got (meteor list | grep mocha) was 2.4.6, a version that the github repo does not have...
The screenshot you reference was made using practicalmeteor:mocha, but meteortesting:mocha is not (as the other answer claims) a fork of it; it is a separately developed package aiming for the same goal, which is running tests in Meteor.
The usage of the two packages is very different, and practicalmeteor:mocha might look a bit trickier to set up; this list applies only to its version 1.0.1 and might change later.
But I have to admit that the documentation needs a refresh ... Anyway, here are some helpful tips, which I'll include in the documentation soon.
If you just want to get started, run this:
meteor add meteortesting:mocha
npm i --save-dev puppeteer#^1.5.0
TEST_BROWSER_DRIVER=puppeteer meteor test --driver-package meteortesting:mocha --raw-logs --once
Do you want to exit after the tests are completed or re-run them after file-change?
Usually, Meteor will restart your application when it exits (a normal exit or a crash), which includes the test-runner.
If you want to use it in your CI, or just want to run the tests once, add --once to the meteor command; otherwise set TEST_WATCH=1 before running it. If you set neither the env variable nor --once, Meteor will print these lines and restart the tests once they have finished:
=> Exited with code: 0
=> Your application is crashing. Waiting for file change.
As of now I haven't found a way to check whether the --once flag is set, which would make the env variable unnecessary. Still, the flexibility to choose between a CI run and continuous testing is very useful.
Maybe you're currently working on a feature and want to run the tests as you work. If you have set TEST_WATCH=1 and are not using --once, Meteor will restart the tests once it registers that a file was changed. You can even limit the test collection using MOCHA_GREP.
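To make the two modes concrete, the invocations differ roughly like this (the MOCHA_GREP pattern is only a placeholder):
# single run, e.g. for CI
TEST_BROWSER_DRIVER=puppeteer meteor test --driver-package meteortesting:mocha --raw-logs --once
# watch mode while developing, re-running on file change and filtering suites by name
TEST_WATCH=1 TEST_BROWSER_DRIVER=puppeteer MOCHA_GREP="my-app" meteor test --driver-package meteortesting:mocha --raw-logs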
Where and how do you want to see the results?
You currently have to choose between seeing all the test results on the command line, or showing the server tests in the command line and the client tests in the browser. Currently practicalmeteor:mocha does not support showing the results of both server and client tests in the browser, as your screenshot shows.
Please take a look at the package documentation for further details:
You should disable the Meteor timestamp to make it look better.
Test output might look quite garbled because of the timestamp added to every line. To avoid this, add --raw-logs to your command.
I hope this answers most of your question. I know the documentation needs some improvements, and I would welcome it if someone took the time to reorganize it into a more logical order for people who "just want to get started".
I maintain a complex Angular (1.5.x) application that is E2E-tested using Protractor (2.5.x). I am experiencing a problem with this approach, which shows up primarily as flaky tests: tests that worked perfectly well in one pull request fail in another. This affects even simple locators, such as by.linkText(...). I debugged the failing tests and the app is on the correct page; the links are present and accessible.
Has anyone else experienced these consistency problems? Knows of a cause or workaround?
Just Say No to More End-to-End Tests!
That said, here are a few things you can do to tackle our mutual merciless "flakiness" enemy:
update to the latest Protractor (currently 4.0.0), which also brings the latest selenium and chromedriver with it
turn off Angular animations
use browser.wait() with a set of built-in or custom Expected Conditions. This is probably by far the most reliable way to approach the problem. Unfortunately, it is use-case and problem specific; you would need to modify your actual tests in the problematic places. For example, if you need to click an element, wait for it to be clickable:
var EC = protractor.ExpectedConditions;
var elm = $("#myid");
browser.wait(EC.elementToBeClickable(elm), 5000);
elm.click();
maximize the browser window (to avoid random "element not visible" or "not clickable" errors). Put this into onPrepare():
browser.driver.manage().window().maximize();
increase the Protractor and Jasmine timeouts
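For instance, in the Protractor config (the values below are only illustrative):
// protractor.conf.js - example timeout values, tune them to your suite
exports.config = {
  allScriptsTimeout: 30000,        // how long to wait for Angular to become stable
  getPageTimeout: 20000,           // how long to wait for a page load
  jasmineNodeOpts: {
    defaultTimeoutInterval: 60000  // per-spec Jasmine timeout
  }
};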
slow Protractor down by tweaking the Control Flow (not sure if it works for 4.0.0, please test)
manually call browser.waitForAngular(); in problematic places. I am not sure why this helps but I've seen reports where it definitely helped to fix a flaky test.
use the jasmine done() callback in your specs. This may help, for example, to not start the it() block until done is called in beforeEach()
return a promise from the onPrepare() function. This usually helps to make sure things are prepared for the test run.
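A minimal sketch, reusing the window maximizing from above (maximize() returns a promise, so Protractor will wait for it):
onPrepare: function() {
  // returning the promise makes Protractor wait for the setup to finish
  return browser.driver.manage().window().maximize();
}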
use the protractor-flake package, which automatically re-runs failed tests. This is more of a quick workaround to the problem
There are also other problem-specific "tricks" like slow typing into the text box, clicking via JavaScript etc.
Yes, I think all of us have experienced such flakiness issues.
Actually, flakiness is quite a common issue with any browser automation tool. It is supposed to be less of a problem with Protractor, since Protractor has built-in waiting that performs actions only after the DOM has loaded properly. Still, in a few cases you might have to use explicit waits if you see intermittent failures.
I prefer to use a few intelligent wait methods, like:
function waitForElementToClickable(locator) {
  var domElement = element(by.css(locator)),
      isClickable = protractor.ExpectedConditions.elementToBeClickable(domElement);

  return browser.wait(isClickable, 2000)
    .then(function () {
      return domElement;
    });
}
Here 2000 ms is used as the timeout; you can make it configurable using a variable. Sometimes I also fall back to browser.sleep() when none of my intelligent waits work.
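Used in a spec, the helper above would look roughly like this (the selector is just a placeholder):
waitForElementToClickable('#save-button').then(function (saveButton) {
  saveButton.click();
});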
It's been my experience that some methods (e.g. sendKeys()) do not always fire at the expected time within the controlFlow() queue and will cause tests to be flaky. I work around this by explicitly adding them to the controlFlow(). E.g.:
this.enterText = function(input, text) {
  return browser.controlFlow().execute(function() {
    input.sendKeys(text);
  });
};
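Assuming the function above lives on a page object, a spec would then call it along these lines (the page object, locator and text are placeholders):
// hypothetical page object usage
loginPage.enterText(element(by.css('#username')), 'test-user');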
A workaround that my team has been using is to re-run only failed tests using the plugin protractor-errors. Using this tool, we can identify real failures versus flaky tests within 2-3 runs. To add the plugin, just add a require statement to the bottom of the Protractor config's onPrepare function:
exports.config = {
  ...
  onPrepare: function() {
    require('protractor-errors');
  }
}
You will need to pass these additional parameters when running your tests with the plugin:
protractor config.js --params.errorsPath 'jasmineReports' --params.currentTime (timestamp) --params.errorRun (true or false)
There is also a cli tool that will handle generating the currentTime if you don't have an easy way to pass in a timestamp.
In the Intern framework, when I specify multiple tests using the functionalSuites config field and run them through the BrowserStack tunnel, only one session is created in BrowserStack (everything is treated as a single test). As a result we have a few issues:
It's practically impossible to use BrowserStack for debugging a large number of tests. There is no navigation; you have to scroll through a huge log.
Tests are not fully isolated. For example, localStorage is shared between all tests.
The question: how do I force Intern to create a new session for every single test? After looking at the codebase, it seems impossible at the moment.
PS: I would assume that this behaviour applies to other tunnels as well.
Use the following Gist
intern-parallel.js
Just put this file alongside intern.js and replace "intern!object" in your functional test files with "tests/intern-parallel".
Example functional test
define([
  //'intern!object',
  'tests/intern-parallel',
  'intern/chai!assert',
  'require'
], function (registerSuite, assert, require) {
  registerSuite({
    name: 'automate first test',

    'google search': function () {
      return this.remote
        .get(require.toUrl('https://www.google.com'))
        .findByName("q")
        .type("Browserstack\n")
        .end()
        .sleep(5000)
        .takeScreenshot();
    }
  });
});
Sorry for the newbie question, I am still new to mocha. I have an existing app for which I am tasked with creating mocha test cases. The app uses passport-auth0 and passport for user login. How do I write a mocha test such that I can log in as a dummy user to test restricted functions?
Since Passport uses strategies that can be swapped in, one option is to put a mock in place of the actual auth0 strategy when running tests. That would look something like:
function MockStrategy() {
}

MockStrategy.prototype.authenticate = function(req) {
  this.success({ id: 1, username: 'joe' });
};

// In test setup, mock out the strategy with one that returns
// dummy data.
passport.use('auth0', new MockStrategy());
Now you can write test cases where req.user will always be the joe user supplied by the mock strategy. You can extend that to cover authentication failures in a similar way.
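For example, a failure-path mock could call this.fail() instead of this.success(); this is only a sketch based on the standard Passport strategy API, not code from the original app:
function FailingStrategy() {
}

FailingStrategy.prototype.authenticate = function(req) {
  // simulate a rejected login attempt
  this.fail({ message: 'invalid credentials' });
};

// Swap this strategy in for tests that cover the failure path.
passport.use('auth0', new FailingStrategy());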
That is my preferred approach, as it is least intrusive. Depending on how the application code is structured, there may be other dependencies that get required and need to be mocked out. For those situations, I've found proxyquire to be useful.
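For instance, a test could load the application module with the real strategy package replaced by a stub; the module path '../app' and the export shape of the stub are hypothetical here:
// test setup sketch - module path and export shape are assumptions
var proxyquire = require('proxyquire');

var app = proxyquire('../app', {
  'passport-auth0': { Strategy: MockStrategy }
});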