Loading dojo patch files when running intern test

With Dojo, patch files can be created and loaded via a loader plugin such as 'patches/patch!', where patch.js contains a load function that requires all the patches.
Contents of patch.js:
define(['require'], function(require) {
return {
load: function load(id, parentRequire, loaderCallback) {
require([< patch files >], loaderCallback);
}
};
});
When running Intern suites, I want those patches to be loaded before my suites run.
Is there a way to ensure that my patch files are loaded before my test suites run?
Regards Marco

When you define your test suite, you have a before hook where you can call your patch load function:
define(["intern!tdd", "patch"], function (tdd, patch) {
tdd.describe("Tests suite", function () {
tdd.before(function () {
// executes before suite starts
patch.load();
});
});
});
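Another option (a sketch, assuming the plugin module id is 'patches/patch' as mentioned in the question) is to list the loader plugin directly in the suite's dependencies. The AMD loader waits for the plugin's loaderCallback, so the patches are guaranteed to be applied before the suite module's factory, and therefore before any test, runs:
define([
    'intern!tdd',
    'intern/chai!assert',
    'patches/patch!'  // plugin resource; completes before this factory executes
], function (tdd, assert) {
    tdd.suite('Tests suite', function () {
        tdd.test('runs with patches applied', function () {
            assert.ok(true);
        });
    });
});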

Related

Running Knex Migrations Between Mocha Tests

I was using Mocha to test my Node.js app with a test database. In order to reset the DB before each test I had the following code, which worked perfectly:
process.env.NODE_ENV = 'test';
var knex = require('../db/knex');
describe("Add Item", function() {
beforeEach(function(done) {
knex.migrate.rollback()
.then(function() {
knex.migrate.latest()
.then(function() {
return knex.seed.run()
.then(function() {
done();
});
});
});
});
...
I've since switched from mocha to mocha-casperjs for my integration tests, and now the knex migrations won't run. I get this error message with the exact same beforeEach hook:
undefined is not an object (evaluating 'knex.migrate.rollback')
phantomjs://platform/new-item.js:12:17
value#phantomjs://platform/mocha-casperjs.js:114:20
callFnAsync#phantomjs://platform/mocha.js:4314:12
run#phantomjs://platform/mocha.js:4266:18
next#phantomjs://platform/mocha.js:4630:13
phantomjs://platform/mocha.js:4652:9
timeslice#phantomjs://platform/mocha.js:12620:27
I'm pretty sure the migration functionality is not included in the webpack build. If you go to http://knexjs.org/, open the debug console, and check the different clients, e.g. mysql.migrate, you will see that there are no functions declared at all.
You can also verify this with Node if you explicitly load the webpack build instead of the node lib:
// load webpack build instead of node build...
let knex = require('knex/build/knex')({client : 'pg'});
console.log(knex.migrate);
// outputs: {}
So... the question is: why are you trying to run your tests in the PhantomJS browser instead of Node.js?
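If the suite has to stay on mocha-casperjs, one workaround (a sketch, not part of the original answer; the file name, path, and the way you invoke it are assumptions) is to keep the database reset in Node and run it before the casper suite starts, e.g. as a small script called from an npm pretest hook:
// reset-db.js -- hypothetical helper run with `node reset-db.js` before mocha-casperjs
process.env.NODE_ENV = 'test';
var knex = require('./db/knex');

knex.migrate.rollback()
  .then(function () { return knex.migrate.latest(); })
  .then(function () { return knex.seed.run(); })
  .then(function () {
    console.log('Test database reset');
    return knex.destroy(); // close the connection pool so the process can exit
  })
  .catch(function (err) {
    console.error(err);
    process.exit(1);
  });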

How to integrate sauce-connect-launcher on webdriver.io wdio.conf.js file

I read http://webdriver.io/guide/testrunner/cloudservices.html, but could not find how to launch the tunnel inside wdio.conf.js and how to integrate it.
Update:
The latest version of webdriver.io now has documentation on integrating the cloud testing services.
See:
http://webdriver.io/guide/testrunner/cloudservices.html
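With the service-based setup the tunnel handling moves out of your own hooks. A minimal sketch (option names follow the current Sauce service docs and may differ between versions; the credentials are placeholders read from the environment):
// wdio.conf.js -- sketch using the wdio Sauce service instead of launching the tunnel manually
exports.config = {
    user: process.env.SAUCE_USERNAME,
    key: process.env.SAUCE_ACCESS_KEY,
    services: ['sauce'],
    sauceConnect: true, // the service starts and stops Sauce Connect for you
    specs: ['./www-test/e2e/spec/*.js'],
    capabilities: [{ browserName: 'chrome' }],
    framework: 'jasmine'
};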
Formerly, to integrate sauce-connect-launcher you needed to configure it inside the wdio.conf.js file so that it launched the tunnel. Example wdio.conf.js file:
var sauceConnectLauncher = require('sauce-connect-launcher');
global.sauceConnectProcess = null;
exports.config = {
//
// ==================
// Specify Test Files
// ==================
// Define which test specs should run. The pattern is relative to the directory
// from which `wdio` was called. Notice that, if you are calling `wdio` from an
// NPM script (see https://docs.npmjs.com/cli/run-script) then the current working
// directory is where your package.json resides, so `wdio` will be called from there.
//
user: 'the_pianist2',
key: '27dde83a-1cf7-450d-8c88-857c4d3cde43',
specs: [
//command line
//'spec/**/*.js' wdio wdio.conf.js
//grunt
'./www-test/e2e/spec/*.js'
],
// Patterns to exclude.
exclude: [
// 'path/to/excluded/files'
],
//
// ============
// Capabilities
// ============
// Define your capabilities here. WebdriverIO can run multiple capabilities at the same
// time. Depending on the number of capabilities, WebdriverIO launches several test
// sessions. Within your capabilities you can overwrite the spec and exclude option in
// order to group specific specs to a specific capability.
//
// If you have trouble getting all important capabilities together, check out the
// Sauce Labs platform configurator - a great tool to configure your capabilities:
// https://docs.saucelabs.com/reference/platforms-configurator
//
capabilities: [{
browserName: 'chrome'
},
{
browserName: 'firefox'
}
],
//
// ===================
// Test Configurations
// ===================
// Define all options that are relevant for the WebdriverIO instance here
//
// Level of logging verbosity.
//logLevel: 'result',
//
// Enables colors for log output.
coloredLogs: true,
//
// Saves a screenshot to a given path if a command fails.
screenshotPath: './errorShots/',
//
// Set a base URL in order to shorten url command calls. If your url parameter starts
// with "/", the base url gets prepended.
// development
baseUrl: 'http://localhost:3001',
// production
//baseUrl: 'http://www.example.com',
//
// Default timeout for all waitForXXX commands.
waitforTimeout: 20000,
//
// Initialize the browser instance with a WebdriverIO plugin. The object should have the
// plugin name as key and the desired plugin options as property. Make sure you have
// the plugin installed before running any tests. The following plugins are currently
// available:
// WebdriverCSS: https://github.com/webdriverio/webdrivercss
// WebdriverRTC: https://github.com/webdriverio/webdriverrtc
// Browserevent: https://github.com/webdriverio/browserevent
// plugins: {
// webdrivercss: {
// screenshotRoot: 'my-shots',
// failedComparisonsRoot: 'diffs',
// misMatchTolerance: 0.05,
// screenWidth: [320,480,640,1024]
// },
// webdriverrtc: {},
// browserevent: {}
// },
//
// Framework you want to run your specs with.
// The following are supported: mocha, jasmine and cucumber
// see also: http://webdriver.io/guide/testrunner/frameworks.html
//
// Make sure you have the node package for the specific framework installed before running
// any tests. If not please install the following package:
// Mocha: `$ npm install mocha`
// Jasmine: `$ npm install jasmine`
// Cucumber: `$ npm install cucumber`
framework: 'jasmine',
//
// Test reporter for stdout.
// The following are supported: dot (default), spec and xunit
// see also: http://webdriver.io/guide/testrunner/reporters.html
reporter: 'spec',
reporterOptions: {
//
// If you are using the "xunit" reporter you should define the directory where
// WebdriverIO should save all unit reports.
outputDir: './'
},
//
// Options to be passed to Jasmine.
jasmineNodeOpts: {
//
// Jasmine default timeout
defaultTimeoutInterval: 20000,
//
// The Jasmine framework allows you to intercept each assertion in order to log the state of the application
// or website depending on the result. For example, it is pretty handy to take a screenshot every time
// an assertion fails.
expectationResultHandler: function(passed, assertion) {
}
},
//
// =====
// Hooks
// =====
// Run functions before or after the test. If one of them returns with a promise, WebdriverIO
// will wait until that promise got resolved to continue.
//
// Gets executed before all workers get launched.
onPrepare: function() {
return new Promise(function(resolve, reject) {
sauceConnectLauncher({
username: 'the_pianist2',
accessKey: '27dde83a-1cf7-450d-8c88-857c4d3cde43',
}, function (err, sauceConnectProcess) {
if (err) {
return reject(err);
}
console.log('Connection established');
global.sauceConnectProcess = sauceConnectProcess;
resolve();
});
});
},
//
// Gets executed before test execution begins. At this point you will have access to all global
// variables like `browser`. It is the perfect place to define custom commands.
before: function() {
// do something
},
//
// Gets executed after all tests are done. You still have access to all global variables from
// the test.
after: function(failures, pid) {
// do something
},
//
// Gets executed after all workers got shut down and the process is about to exit. It is not
// possible to defer the end of the process using a promise.
onComplete: function() {
console.log('Tests completed');
global.sauceConnectProcess.close(function () {
console.log("Closed Sauce Connect process");
return true;
});
}};
In the 'onPrepare' hook the script that opens the tunnel is launched. Wrapping it in a Promise is very important, because WebdriverIO waits for the connection callback and only runs the next steps once onPrepare has finished:
https://github.com/webdriverio/webdriverio/issues/1062
After that, the tests are launched on the Sauce Labs servers.

Jasmine test makes no pass/fail report under webdriver.io

I run the following Jasmine test under webdriver.io like this: node path/to/test/script.js. The test executes (the web browser is pulled up and the target page visited), and thanks to the last line the Jasmine 'it' functions below do execute (without the last line they don't, although the 'describe' function still runs).
But Jasmine doesn't provide any kind of report for the 'it' tests and the 'expect' assertions; there's nothing on the console from Jasmine, no pass/fail result, and so forth.
How do I get Jasmine to produce a report, especially one that is readable by Jenkins?
The problem test script:
var webdriverjs = require('foo-bar/node_modules/webdriverio');
var jasmine = require('foo-bar/node_modules/jasmine-node');
var options = {
port: 4445,
desiredCapabilities: {
browserName: process.argv[2] || 'phantomjs'
}
};
describe('my webdriverjs tests', function () {
var client;
jasmine.DEFAULT_TIMEOUT_INTERVAL = 9999999;
beforeEach(function() {
client = webdriverjs.remote(options);
client.init();
});
it('shows the correct title', function (done) {
client
.url('http://localhost:4444').getTitle(function(err, title) {
expect(title).toBe('foo bar');
}).call( done );
});
afterEach(function(done) {
client.end(done);
});
});
jasmine.getEnv().execute();
Note: Cross-posted here: https://groups.google.com/forum/#!topic/webdriverio/-EOrQ003B9I
I ran into some of the same challenges when I was looking into this. The big issue is that this test needs to be executed as a jasmine test, not a webdriver test.
describe('my webdriverio tests with jasmine', function(){
var client;
beforeEach(function(){
client = require('path/to/webdriverio').remote({
desiredCapabilities: {browserName:'safari'}
}).init().url('https://www.stackoverflow.com');
}, 5000);
afterEach(function(done){
client.end(done);
}, 5000);
it('runs a very simple test',function(done){
client.getTitle(function(err,result){
expect(result).toBe('Stack Overflow');
}).call(done);
}, 5000);
});
Now to run this test, you would just run a typical jasmine-node command from your terminal.
It comes down to the naming convention you are using. First, you need to remove the last line: jasmine.getEnv().execute(); then run the jasmine-node command with the --matchall flag:
jasmine-node --matchall path/to/test/script.js
If you named your file script_spec.js, then you could run it without the --matchall flag.
This is also assuming you have jasmine-node installed globally. If you want to use the local node_modules dependency, then you need to run this command:
./node_modules/jasmine-node/bin/jasmine-node --matchall path/to/test/script.js
When you are using the jasmine-node module you should run your spec with
node_modules/jasmine-node/bin/jasmine-node $TEST_DIRECTORY
Your test files should end with *spec.js, *spec.coffee or *spec.litcoffee, as the docs say.
Also, jasmine.getEnv().execute(); and var jasmine = require('foo-bar/node_modules/jasmine-node'); should not be in your script.

intern dojo loader issue

I'm trying to set up Intern for my project, a Dojo/JS project, and the server is not Node... I get a loader issue, which seems to be due to dojo/has using the Dojo loader... The require wrapper suggested here did not work for me.
I get the error below:
> node node_modules/intern/client.js config=tests/intern
Defaulting to "console" reporter
dojo/text plugin failed to load because loader does not support getText
TypeError: undefined is not a function
at Object.load (lib/dojo/dojo/text.js:199:6)
Below are my intern configuration and the test file:
/tests/intern.js: (config file)
loader: {
packages: [ { name: 'visitorsPortal', location: 'portals/visitor' },
{ name: 'dojo', location: 'lib/dojo/dojo'},
{ name: 'dijit', location: 'lib/dojo/dijit'},
{ name: 'portalLib', location: 'portals/lib'} ]
},
suites: [ 'tests/uitests' ],
tests/uitests:
define([
'intern!tdd',
'intern/chai!assert',
'portals/visitor/views/MyModule'
], function (test, assert, MyModule) {
// empty for now...
});
This has nothing to do with dojo/has and everything to do with the dojo/text plugin requiring functionality that only exists within the Dojo 1 loader when being used server-side.
If you are attempting to test software that relies on any non-standard AMD loader functionality, you will need to use the non-standard loader, or override those modules with alternative copies that are compatible with other loaders.
In this specific case, your easiest path forward is to use the geezer edition of Intern, since it includes the old Dojo loader which contains these non-standard extensions. The best path forward is to remap the dojo/text module to another compatible module that does not need anything special in the loader in order to retrieve the data:
// in intern.js
// ...
loader: {
map: {
'*': {
'dojo/text': 'my/text'
}
}
},
// ...
I struggled with the same problem yesterday, but thanks to C Snover's answer here and the question you're linking to, I did make some progress.
I added the map directive to the intern.js loader config (as C Snover suggests).
// in intern.js
// ...
loader: {
map: {
'*': {
'dojo/text': 'my/text'
}
}
},
// ...
For the my/text module, I just copied dojo/text and added an else if clause to the part that resolves the getText function:
if(has("host-browser")){
getText= function(url, sync, load){
request(url, {sync:!!sync}).then(load);
};
} else if(has("host-node")){
// This was my addition...
getText = function(url, sync, load) {
require(["dojo/node!fs"], function(fs) {
fs.readFile(url, 'utf-8', function(err, data) {
if(err) console.error("Failed to read file", url);
load(data);
});
});
};
} else {
// Path for node.js and rhino, to load from local file system.
// TODO: use node.js native methods rather than depending on a require.getText() method to exist.
if(require.getText){
getText= require.getText;
}else{
console.error("dojo/text plugin failed to load because loader does not support getText");
}
}
However, even though the tests were running in intern via node, the host-node value wasn't being set. That was fixed by setting dojo-has-api in my intern.js config:
define(["intern/node_modules/dojo/has"], function(has) {
has.add("dojo-has-api", true);
return { /* my intern config as normal */ };
});
I'll admit I don't understand 100% what I've done here, and with the copy/pasting it's not exactly pretty, but it serves as a temporary fix for my problem at least.
Note: This did introduce another set of issues though: Since Dojo now knows that it's running in node, dojo/request no longer tries to use XHR. I was using sinon.js to mock my xhr requests, so this had to be changed.

How to set jasmine for karma e2e for testing angular app?

I am trying to create e2e tests with Karma and Jasmine using Yeoman. In my karma-e2e.conf.js I add Jasmine:
files = [
JASMINE,
JASMINE_ADAPTER,
ANGULAR_SCENARIO,
ANGULAR_SCENARIO_ADAPTER,
'test/e2e/**/*.js'
];
I need async testing, so I need to use runs, waits, waitsFor (https://github.com/pivotal/jasmine/wiki/Asynchronous-specs)
But if I try to use it:
it('test', function () {
runs(function () {
...
});
});
The Scenario test runner returns this:
TypeError: Cannot call method 'runs' of null
at runs (http://localhost:8080/adapter/lib/jasmine.js:562:32)
at Object.<anonymous> (http://localhost:8080/base/test/e2e/eduUser.js:42:3)
at Object.angular.scenario.SpecRunner.run (http://localhost:8080/adapter/lib/angular-scenario.js:27057:15)
at Object.run (http://localhost:8080/adapter/lib/angular-scenario.js:10169:18)
I don't know where the problem is. Can you help me please?
Angular e2e tests with Karma don't and can't use the JASMINE adapter. Instead you have the ANGULAR_SCENARIO_ADAPTER which has a similar feel to writing Jasmine tests.
All commands in the adapter's API are asynchronous anyway. For example element('#nav-items').count() doesn't return a number, it returns a Future object. Future objects are placed in a queue and executed asynchronously as the runner progresses. To quote the API docs:
expect(future).{matcher}:
[...] All API statements return a future object, which get a value assigned after they are executed.
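For instance, a check like the one below (a minimal illustration based on the count() example above) queues a Future for the element count; the matcher resolves it once the runner reaches that step in the queue:
// count() queues a Future; toBe() evaluates it when the scenario queue reaches this step
expect(element('#nav-items').count()).toBe(3);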
If you need to run your own asynchronous test code, you can extend the adapter's DSL; this is easier than it might sound. The idea is that you return your own Future which can be evaluated by a matcher such as toBe(). There are some examples of how to do this in the e2e-tests.js Gist from Vojta. Just remember to call done(null, myReturnValue); when your test code succeeds (myReturnValue is the value evaluated by your matcher), or done('Your own error message'); if you want the test to fail.
UPDATE: In response to the question below. To simulate a login, first add a function called login to the DSL:
angular.scenario.dsl('login', function() {
return function(selector) {
// @param {DOMWindow} appWindow The window object of the iframe (the application)
// @param {jQuery} $document jQuery wrapped document of the application
// @param {function(error, value)} done Callback that should be called when done
// (will basically call the next item in the queue)
return this.addFutureAction('Logging in', function(appWindow, $document, done) {
// You can do normal jQuery/jqLite stuff here on $document, just call done() when your asynchronous tasks have completed
// Create some kind of listener to handle when your login is complete
$document.one('loginComplete', function(e){
done(null, true);
}).one('loginError', function(e){
done('Login error', false);
});
// Simulate the button click
var loginButton = $document.find(selector || 'button.login');
loginButton.click();
})
};
});
And then call:
beforeEach( function()
{
expect( login('button.login') ).toBeTruthy();
});