In the Intern framework, when I specify multiple tests via the functionalSuites config field and run them through the BrowserStack tunnel, only one session is created in BrowserStack (everything is treated as a single test). As a result, we have a few issues:
It's practically impossible to use BrowserStack for debugging a large number of tests. There is no navigation; you have to scroll through one huge log.
Tests are not fully isolated. For example, localStorage is shared between all tests.
The question: how can I force Intern to create a new session for every single test? After looking at the codebase, it seems impossible at the moment.
PS: I would assume this behaviour applies to other tunnels as well.
Use the following Gist:
intern-parallel.js
Just put this file alongside intern.js and replace "intern!object" in your functional test files with "tests/intern-parallel".
Example functional test
define([
    //'intern!object',
    'tests/intern-parallel',
    'intern/chai!assert',
    'require'
], function (registerSuite, assert, require) {
    registerSuite({
        name: 'automate first test',

        'google search': function () {
            return this.remote
                .get(require.toUrl('https://www.google.com'))
                .findByName("q")
                .type("Browserstack\n")
                .end()
                .sleep(5000)
                .takeScreenshot();
        }
    });
});
Related
I am struggling to pass launch arguments to Detox, for example when I want to pass a few different users as launch args. My init file looks like:
beforeAll(async () => {
    await device.launchApp({
        newInstance: true,
        permissions: { notifications: 'YES' },
        launchArgs: {
            users: {
                user1: { email: '123#abc.com', pass: '123456' },
                user2: { email: 'abc#123.com', pass: '654321' },
            }
        }
    });
});
However in my test file
await device.appLaunchArgs.get();
returns an empty object. Any ideas what I am doing wrong? Am I misunderstanding what launchArgs are for?
The purpose of launchArgs is to send parameters to the app being tested, because you can't communicate with the app process otherwise. launchArgs let you configure specific app behavior, either (1) to pass dynamic parameters based on your test environment (e.g. the ports of another process the app needs to connect to), or (2) to simulate a condition for a specific test case (e.g. you write two e2e tests, one with some feature flag turned on and another with it turned off).
However in my test file
You don't access the values in a test file. Since the test file runs in the same Node.js process as the beforeAll, there is no need to pass args there. In fact, you can launch the app (with the appropriate args) directly in your test case, which is especially useful for case (2) above.
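For illustration, here is a minimal sketch of case (2): launching the app from inside a test with the arguments it needs. The enableNewFeature flag and the test body are assumptions for the example, not part of the original question.
it('shows the new feature when the flag is on', async () => {
    // Relaunch the app with a hypothetical feature flag passed as a launch argument.
    await device.launchApp({
        newInstance: true,
        launchArgs: { enableNewFeature: true },
    });

    // ...assertions against the new feature's UI go here...
});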
To read the launchArgs in the app, you can create an .e2e.js mock file and then use react-native-launch-arguments to retrieve the configured values. The rest is up to you, but the general idea is to use the launch args in your app to change whatever part of the business logic or configuration you want to test.
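As a rough sketch of the app side, assuming the react-native-launch-arguments package is installed (the users key mirrors the question's config; what you do with it is up to your app):
// Somewhere in your app code (or in an .e2e.js mock file)
import { LaunchArguments } from 'react-native-launch-arguments';

// Returns whatever was passed via launchArgs when Detox launched the app.
const launchArgs = LaunchArguments.value();
const users = launchArgs.users || {};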
I am testing a large project with long scenarios (some with more than 100 interactions with the webpage). I would like to break them down into shorter steps that run in sequence (like in Mocha), but I don't know how to do that.
Example: In a single test, I would like to run
fixture('test1')
test('test1', async (t) => {
    ...login
    ...createSubAccount
    ...modifySubAccount
    ...activateSubAccount
})
where each of the steps would show up in the console and in the report. Right now, the only thing I know how to do is put each step into its own test() context, but that means that if, e.g., createSubAccount fails, modifySubAccount and activateSubAccount will still run (even though the workflow has already failed). Also, there is the unhappy part that each test() clears the browser (but I can deal with that).
In short: how can I split the tests so that if a single substep of a fixture fails, the whole fixture fails immediately? Or something similar, but for test()?
Also, I don't want the whole pipeline to end on the first test failure, as would happen with the --stopOnFirstFail flag; I want to run all the tests to find out which ones are failing.
test() is the smallest unit. The idea is that it's an independent piece of testing code, i.e. a bunch of test steps. This doesn't change no matter what tool you use (TestCafe, Playwright, Puppeteer, Cypress, Mocha, Jest, ...).
And so:
Right now, the only thing I know how to do is to put each step into its own test() context, but that means that if e.g. createSubAccount fails, modifySubAccount and activateSubAccount will still run (even though the workflow already failed).
seems to break one of the main principles of testing, namely that tests are independent. Don't split test steps that belong together across different tests.
If the only drawback now is the length of your test, why don't you do it like you hinted at in the example:
test('test1', async (t) => {
    login();
    createSubAccount();
    modifySubAccount();
    activateSubAccount();
});
You can create functions for login, createSubAccount, etc. and then use only such functions in your tests, which makes them as short as shown here (a minimal sketch of such helpers follows the scenarios below). You can also easily create various scenarios:
test('activate account without modification', async (t) => {
    login();
    createSubAccount();
    activateSubAccount();
});

test('create account', async (t) => {
    login();
    createSubAccount();
});

test('create account without login', async (t) => {
    createSubAccount();
});

// and so on
It doesn't even look that long.
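For illustration, a minimal sketch of what such helper functions could look like. The selectors, URL, and credentials are hypothetical, and passing the test controller t into each helper is just one possible convention:
// Hypothetical selectors and credentials; adjust to your application.
async function login(t) {
    await t
        .typeText('#email', 'user@example.com')
        .typeText('#password', 'secret')
        .click('#login-button');
}

async function createSubAccount(t) {
    await t
        .click('#create-sub-account')
        .typeText('#sub-account-name', 'my-sub-account')
        .click('#save');
}

fixture('sub-account scenarios')
    .page('https://example.com');

test('create account', async (t) => {
    await login(t);
    await createSubAccount(t);
});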
TestCafe does not support the functionality you require at the moment. The only solution I can think of is, as you proposed, to implement your workflow as a fixture with the steps as tests, use the disablePageReloads feature (NOTE: it is experimental), track the number of passed steps manually, and check that count at the beginning of each test. It is a bit tedious, but it should work as you need.
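A rough sketch of that idea, assuming hypothetical step contents and a plain module-level counter (this is not an official TestCafe pattern, just one way to wire it up):
let completedSteps = 0;

fixture('sub-account workflow')
    .disablePageReloads   // experimental: keeps the page between tests
    .page('https://example.com');

test('step 1: login', async (t) => {
    // ...login interactions...
    completedSteps = 1;
});

test('step 2: create sub-account', async (t) => {
    // Fail fast if the previous step did not complete.
    await t.expect(completedSteps).eql(1, 'login step failed, aborting workflow');
    // ...create sub-account interactions...
    completedSteps = 2;
});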
Another option, and the easiest way to split a long test into steps, is to simply divide it into functions. The only issue that may arise is related to reporting: even if you implement a custom reporter, there is currently no way to pass information about the individual steps into it (you can vote for the corresponding feature request).
Also, I would like to draw your attention to the Page Model pattern. It can shrink your tests and make them more readable.
Please open a new feature request with a comprehensive description if you have a better idea of how this should be done.
I maintain a complex Angular (1.5.x) application that is E2E tested using Protractor (2.5.x). I am experiencing a problem with this approach, which manifests primarily as flaky tests: tests that worked perfectly well in one pull request fail in another. This concerns simple locators, such as by.linkText(...). I debugged the failing tests: the app is on the correct page, and the links are present and accessible.
Has anyone else experienced these consistency problems? Knows of a cause or workaround?
Just Say No to More End-to-End Tests!
That said, here are a few things you can do to tackle our mutual, merciless "flakiness" enemy:
update to the latest Protractor (currently 4.0.0), which also brings the latest Selenium and ChromeDriver with it
turn off Angular animations
use explicit waits: browser.wait() with a set of built-in or custom Expected Conditions. This is probably by far the most reliable way to approach the problem. Unfortunately, it is use-case and problem specific; you would need to modify your actual tests in the problematic places. For example, if you need to click an element, wait for it to be clickable:
var EC = protractor.ExpectedConditions;
var elm = $("#myid");
browser.wait(EC.elementToBeClickable(elm), 5000);
elm.click();
maximize the browser window (to avoid random "element not visible" or "not clickable" errors). Put this into onPrepare():
browser.driver.manage().window().maximize();
increase the Protractor and Jasmine timeouts
slow Protractor down by tweaking the Control Flow (not sure if it works for 4.0.0, please test)
manually call browser.waitForAngular(); in problematic places. I am not sure why this helps but I've seen reports where it definitely helped to fix a flaky test.
use the Jasmine done() callback in your specs. This can help, for example, to keep an it() block from starting until done is called in beforeEach().
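A minimal sketch, assuming a hypothetical /dashboard route:
beforeEach(function (done) {
    // The it() blocks below will not start until this navigation has finished.
    browser.get('/dashboard').then(function () {
        done();
    });
});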
return a promise from the onPrepare() function. This usually helps to make sure things are prepared for the test run
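A minimal sketch; stashing the browser name is just an example of preparation work:
// in protractor.conf.js
onPrepare: function () {
    // Protractor delays the test run until the returned promise resolves.
    return browser.getCapabilities().then(function (caps) {
        browser.browserName = caps.get('browserName');
    });
},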
use the protractor-flake package, which automatically re-runs failed tests. This is more of a quick workaround to the problem.
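A typical invocation looks something like this (check the package's README for the exact options):
protractor-flake --max-attempts=3 -- conf.js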
There are also other problem-specific "tricks" like slow typing into the text box, clicking via JavaScript etc.
Yes, I think all of us have experienced such flakiness issues.
Actually, flakiness is quite a common issue with any browser automation tool. However, it is supposed to be less of a problem with Protractor, since Protractor has built-in waiting that performs actions only after the DOM has loaded properly. But in a few cases you might still have to use explicit waits if you see intermittent failures.
I prefer to use a few intelligent wait methods like:
function waitForElementToClickable(locator) {
    var domElement = element(by.css(locator)),
        isClickable = protractor.ExpectedConditions.elementToBeClickable(domElement);

    return browser.wait(isClickable, 2000)
        .then(function () {
            return domElement;
        });
}
Here 2000 ms is used as the timeout; you can make it configurable with a variable. Sometimes I also fall back to browser.sleep() when none of my intelligent waits work.
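Usage then looks something like this (the selector is hypothetical):
waitForElementToClickable('#submit-button').then(function (elm) {
    return elm.click();
});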
It's been my experience that some methods (e.g. sendKeys()) do not always fire at the expected time within the controlFlow() queue and will cause tests to be flaky. I work around this by explicitly adding them to the controlFlow(). E.g.:
this.enterText = function(input, text) {
    return browser.controlFlow().execute(function() {
        input.sendKeys(text);
    });
};
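Assuming the helper lives on a page object (called loginPage here purely for illustration), usage would look like:
loginPage.enterText($('#username'), 'admin');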
A workaround that my team has been using is to re-run only the failed tests using the protractor-errors plugin. Using this tool, we can identify real failures versus flaky tests within 2-3 runs. To add the plugin, just add a require statement to the bottom of the Protractor config's onPrepare function:
exports.config = {
    ...
    onPrepare: function() {
        require('protractor-errors');
    }
}
You will need to pass these additional parameters when running your tests with the plugin:
protractor config.js --params.errorsPath 'jasmineReports' --params.currentTime (timestamp) --params.errorRun (true or false)
There is also a CLI tool that will handle generating the currentTime if you don't have an easy way to pass in a timestamp.
I have an application made in PHP for which I am using Selenium for unit testing with PHPUnit. The problem is that I have to set up the environment before I can run the tests. For example, I have to set session variables, log in, and fetch data from a remote server. All this takes a lot of time, and it is not feasible to re-do the setup in every test function.
I am looking for a way to use the same browser session for running all the tests. I tried looking for resources online but couldn't find any good sources on this. The code I have written is:
protected function setUp()
{
    parent::setUp();
    $this->setBrowserUrl("http://localhost/devel/");
}

public function start()
{
    parent::start();
    $this->open("");
    // Setting up the environment here
}

public function testFunction()
{
    // A test function
}

public function testFunction2()
{
    // Another test function
}
But this opens a new browser instance for each of the functions. Is there any workaround for this? Or is there any command-line parameter for the Selenium server that handles this?
"[I am] using selenium for unit testing using phpUnit"
No, you're not. You're using PHPUnit with Selenium for functional testing. :-)
But since it's probably not in your best interest to re-invent that wheel, you want Mink: http://mink.behat.org/
It wraps around Guzzle and lets you do session-based acceptance testing using a bunch of different drivers. It has Goutte for a headless browser, and can work with Selenium and Sahi and a bunch of others.
Also of note, depending on your needs, is Behat: http://behat.org/
It lets you write client-readable test documents that can be turned into Mink-based acceptance tests.
HTH.
Question already answered. The unaccepted answer did the job for me.
See: How do I run a PHPUnit Selenium test without having a new browser window run for each function?
I am currently researching ways to integrate a test suite for an application based on ember.js into Travis CI. First off, we're not on the open-source service; we use it for private repositories, etc.
I looked at how several open-source projects run their ember.js test suites, and it looks like they set up a server with their project, which presumably gets updated whenever someone pushes to the repository. PhantomJS is then used to run the tests against that server (and not on Travis CI itself).
The problem I have with this approach is that it adds another step (and ultimately complexity): I have to update and maintain a server with the latest code so I can use PhantomJS to run the test suite.
Another drawback is that I don't see how it would enable us to test PRs (pull requests) either; the server would have to be updated with the code from the PR. Testing PRs before they are merged is one of the great things about Travis CI.
I couldn't find much/anything about running ember.js tests only through the CLI – I am hoping someone tackled this issue before me.
I can't speak to your questions about Travis CI ... but I can offer some thoughts about unit testing ember.js code with Jasmine.
Before I started using ember.js, I was unit testing with Jasmine and a simple node.js module called jasmine-node. This allowed me to quickly run a suite of Jasmine unit tests from the command line without having to open a browser or hack around with a "js-test runner", etc.
That worked great when I had Jasmine, jQuery, and the simple JavaScript modules I used to keep my code human-readable. But the moment I needed to use ember/handlebars/etc., the jasmine-node module fell down, because it expects everything to be available on both global and window, and since Ember is a browser library, not everything was on global.
I started looking at PhantomJS and, like yourself, couldn't see myself adding the complexity. So instead of hacking around this, I decided to take a weekend and write what was missing from the Jasmine test-runner space. I wanted the same power as jasmine-node (meaning all I would need on my CI box was a recent version of node.js and a simple npm module to run the tests).
I wrote an npm module called jasmine-phantom-node. At its core, it uses node.js to run PhantomJS, which in turn fires up a regular Jasmine HTML runner and scrapes the page for test results using a very basic Express web app.
I took the time to put two different examples in the GitHub project so others could quickly see how it works. It's opinionated, so you will need an HTML file in your project root that the plugin uses to execute your tests. It also requires jasmine and jasmine-html, along with a recent jQuery.
It solved this issue for me personally, and now I can write tests against Ember using plain Jasmine and run them from the command line without a browser.
Here is a sample Jasmine unit test that I wrote against an Ember view recently while spiking with this test runner. Here is a link to the full ember/django project if you want to see how the view under test is used in the app.
require('static/script/vendor/filtersortpage.js');
require('static/script/app/person.js');

describe("PersonApp.PersonView Tests", function() {
    var sut, router, controller;

    beforeEach(function() {
        sut = PersonApp.PersonView.create();
        router = new Object({ send: function() {} });
        controller = PersonApp.PersonController.create({});
        controller.set("target", router);
        sut.set("controller", controller);
    });

    it("does not invoke send on router when username does not exist", function() {
        var event = { 'context': { 'username': '', 'set': function() {} } };
        var sendSpy = spyOn(router, 'send');
        sut.addPerson(event);
        expect(sendSpy).not.toHaveBeenCalledWith('addPerson', jasmine.any(String));
    });

    it("invokes send on router with username when exists", function() {
        var event = { 'context': { 'username': 'foo', 'set': function() {} } };
        var sendSpy = spyOn(router, 'send');
        sut.addPerson(event);
        expect(sendSpy).toHaveBeenCalledWith('addPerson', 'foo');
    });

    it("does not invoke set context when username does not exist", function() {
        var event = { 'context': { 'username': '', 'set': function() {} } };
        var setSpy = spyOn(event.context, 'set');
        sut.addPerson(event);
        expect(setSpy).not.toHaveBeenCalledWith('username', jasmine.any(String));
    });

    it("invokes set context to empty string when username exists", function() {
        var event = { 'context': { 'username': 'foo', 'set': function() {} } };
        var setSpy = spyOn(event.context, 'set');
        sut.addPerson(event);
        expect(setSpy).toHaveBeenCalledWith('username', '');
    });
});
Here is the production Ember view that I'm unit testing above:
PersonApp.PersonView = Ember.View.extend({
    templateName: 'person',

    addPerson: function(event) {
        var username = event.context.username;
        if (username) {
            this.get('controller.target').send('addPerson', username);
            event.context.set('username', '');
        }
    }
});