Making Protractor tests run on different URLs in parallel (same browser) - selenium

I want to run tests in parallel. I tried using shardTestFiles: true with maxInstances: 2, but what I need is to be able to specify different URLs for these test instances to hit, as my application is not completely stateless.
I tried defining multiple Chrome capabilities and identifying them by name to achieve this, like so:
multiCapabilities: [
  {
    'browserName': 'chrome',
    name: 'browser1'
  },
  {
    'browserName': 'chrome',
    name: 'browser2'
  }
]
But Protractor does not shard the tests between capabilities.
Thanks for the help.

That's a good question and it's definitely possible. Use browser.getProcessedConfig to fetch the current browser's name at run-time and modify baseUrl accordingly. Refer to the browser.getProcessedConfig API doc.
Modify your onPrepare function as below:
onPrepare: function() {
  browser.getProcessedConfig().then(function(config) {
    switch (config.capabilities.name) {
      case 'browser1':
        browser.baseUrl = 'https://angularjs.org/';
        break;
      case 'browser2':
        browser.baseUrl = 'http://www.protractortest.org/';
        break;
      default:
        browser.baseUrl = 'https://builtwith.angularjs.org/';
        break;
    }
  });
},
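For reference, a minimal conf.js sketch that ties the two pieces together (the spec path and URLs are placeholders, not from the question). Note that with multiCapabilities each capability still runs its own copy of the specs, so the two runners hit different baseUrls in parallel:

exports.config = {
  framework: 'jasmine',
  specs: ['./specs/**/*.spec.js'],   // placeholder spec path
  multiCapabilities: [
    { browserName: 'chrome', name: 'browser1' },
    { browserName: 'chrome', name: 'browser2' }
  ],
  onPrepare: function() {
    // returning the promise makes protractor wait until baseUrl has been set
    return browser.getProcessedConfig().then(function(config) {
      browser.baseUrl = config.capabilities.name === 'browser1'
        ? 'https://angularjs.org/'
        : 'http://www.protractortest.org/';
    });
  }
};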

Related

Interaction between restartBrowserBetweenTests with onPrepare()

I want each of my tests to run on a clean browser (Firefox), so I use the restartBrowserBetweenTests: true option. Because I use a non-Angular demo app, I call browser.waitForAngularEnabled(false) in the onPrepare() function. It works fine when I run a single spec, but when I run 2 specs, I get the following error.
Angular could not be found on the page. If this is not an Angular application, you may need to turn off waiting for Angular.
How can I solve this? And in addition, how does onPrepare work in this case - does it run every time the browser starts, or once before all specs?
Here is my conf.js:
const screenshotReporter = require('./screenshotCustomReporter')

exports.config = {
  capabilities: {
    browserName: 'firefox'
  },
  restartBrowserBetweenTests: true,
  framework: 'jasmine',
  directConnect: true,
  baseUrl: URL,
  specs: ['path/**/*Spec.js'],
  // Options to be passed to Jasmine.
  jasmineNodeOpts: {
    defaultTimeoutInterval: 30000,
    includeStackTrace: true
  },
  onPrepare: () => {
    require("#babel/register");
    jasmine.getEnv().addReporter(screenshotReporter)
    browser.waitForAngularEnabled(false)
  }
}
You can recreate this issue using the following simple project:
conf.js
exports.config = {
  framework: 'jasmine',
  specs: ['./app.1.js', './app.2.js'],
  seleniumAddress: 'http://localhost:4444/wd/hub',
  restartBrowserBetweenTests: true,
  onPrepare: function() {
    browser.waitForAngularEnabled(false);
  },
}
app.1.js
describe('second test', () => {
  it('should check is displayed successfully', () => {
    browser.driver.get("https://stackoverflow.com");
    browser.driver.sleep(5000);
    expect(element(by.linkText('Ask Question')).isDisplayed()).toBe(true);
  });
});
app.2.js
describe('first test', () => {
  it('should check is displayed successfully', () => {
    browser.driver.get("https://stackoverflow.com");
    browser.driver.sleep(5000);
    expect(element(by.linkText('Ask Question')).isDisplayed()).toBe(true);
  });
});
onPrepare holds the settings that need to be executed once for the whole suite; it is a one-time operation regardless of the number of spec files.
One concept you need to understand is that whenever a new Firefox browser instance is launched, WebDriverJS initializes a new webdriver instance, and Protractor's global browser object is initialized along with it.
In your case the first spec file starts Firefox, the onPrepare function is executed afterwards, and Protractor's default setting is overridden by waitForAngularEnabled. But when you run the second spec file, Firefox is launched again with a fresh webdriver instance and a fresh Protractor browser that expects an Angular application, and that is why the test fails.
The solution to this problem is to disable waiting for Angular inside the spec file itself:
describe('first test', () => {
  beforeAll(() => {
    browser.waitForAngularEnabled(false);
  });

  it('should check is displayed successfully', () => {
    browser.driver.get("https://stackoverflow.com");
    browser.driver.sleep(5000);
    expect(element(by.linkText('Ask Question')).isDisplayed()).toBe(true);
  });
});
Note: if you are using restartBrowserBetweenTests: true, then you will have to use a beforeEach() function for waitForAngularEnabled, because a fresh webdriver instance is created before every test.
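A minimal sketch of what that looks like (the same spec as above, just with beforeEach so the setting is re-applied after every browser restart):

describe('first test', () => {
  // re-applied before every test, because restartBrowserBetweenTests gives each test a fresh browser
  beforeEach(() => {
    browser.waitForAngularEnabled(false);
  });

  it('should check is displayed successfully', () => {
    browser.driver.get("https://stackoverflow.com");
    browser.driver.sleep(5000);
    expect(element(by.linkText('Ask Question')).isDisplayed()).toBe(true);
  });
});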

Protractor asynchronous parallel testing on docker containers

For a few days I have been struggling with parallel execution of tests using Selenium in Docker.
The scenario:
Define browsers in multiCapabilities with specs.
Deploy containers with a selenium-hub and 2 Firefox, 2 Chrome nodes.
Run the tests.
The issue appears when Chrome and Firefox run the same spec in parallel.
Depending on the speed of execution, let's say Firefox is first and Chrome second (spec1 runs on both browsers at the same time).
Due to a dependency, spec1 succeeds on Firefox (as expected) and on Chrome it should fail with an exception (as expected). Here comes the interesting part:
the Firefox test finishes, but Chrome hangs (at the point where it should throw the exception) and only fails after the configured jasmine/test timeout, let's say 3 minutes, with
"unresolved promise"....
Since I have await on the method and it is wrapped in try/catch, the exception should propagate up to the test, where I have also wrapped the test body in try/catch, and if there is an exception, done.fail() should stop the test.
But it never gets there... After a long time of debugging, the only thing I can see is that the exception is thrown but it never reaches the test where I should catch it and fail the test.
Configuration of multiCapabilities:
{
  browserName: 'chrome',
  shardTestFiles: true,
  maxInstances: 2,
  specs: [
    '../spec/**/spec1.js'
  ]
},
{
  browserName: 'firefox',
  maxInstances: 2,
  shardTestFiles: true,
  marionette: true,
  specs: [
    '../spec/**/spec1.js'
  ]
},
Protractor-specific settings:
SELENIUM_PROMISE_MANAGER: false,
seleniumAddress: 'ip of the selenium hub',
maxSessions: 4,
framework: 'jasmine',
... and other custom, unrelated props such as loggers, reporters etc.
Test example:
describe('test 1', () => {
  it('can done something', async (done) => {
    try {
      await doSomething();
    } catch (e) {
      done.fail(e);
    }
    done();
  }, 1000 * 60 * 5);
});
If there is an exception from doSomething(), the test should be forced to fail, but it hangs during parallel execution.
Am I missing something, and/or can you suggest why it hangs while executing the same test on different browsers?
If you need some more information, please let me know.
This kind of done callback does not work with async functions. If you want to fail the test, you can do it more simply:
describe('test 1', () => {
  it('can done something', async () => {
    try {
      await doSomething();
    } catch (e) {
      throw new Error(e);
    }
  });
});
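In fact, with SELENIUM_PROMISE_MANAGER: false and an async test body, the try/catch above is optional: if an awaited call rejects, Jasmine fails the test on its own. A minimal sketch (doSomething stands in for whatever async helper the question uses):

describe('test 1', () => {
  it('can do something', async () => {
    // a rejection from doSomething() propagates out of the async function and fails the test
    await doSomething();
  }, 1000 * 60 * 5);
});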
P.S. I highly recommend using Selenoid for running e2e tests in containers.

Protractor tests fail, window too small

I have the bit of config below set up for my Protractor tests. Even though I specify that the window size should be set to 1600x1000px, it doesn't happen every time. Quite often my tests fail because the window isn't resized and is really tiny, and some elements become unreachable because of this.
browser.manage().window().setSize(1600, 1000); should resize the window every time, so why does it sometimes get ignored? Is there a reason why this is happening?
exports.config = {
  onPrepare: function() {
    var location = (browser.params.logFileLocation == undefined ? '' : browser.params.logFileLocation);
    browser.manage().window().setSize(1600, 1000);
    jasmine.getEnv().addReporter(
      new Jasmine2HtmlReporter({
        savePath: './results' + location,
        takeScreenshots: true,
        takeScreenshotsOnlyOnFailures: true,
        showPassed: false,
        fileName: 'test-results'
      })
    );
    .
    .
    .
  }
};
You can try setting the window size in your protractor.conf under capabilities.
For example:
capabilities: {
  browserName: 'chrome',
  chromeOptions: {
    args: ['--window-size=1600,1000']
    // or fullscreen:
    // args: ["--start-maximized", "--start-fullscreen"],
  }
}
This will maximize your window.
Try this out:
browser.driver.manage().window().maximize();
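One more thing worth trying (an assumption on my part, not from the answers above): setSize() returns a promise, and returning that promise from onPrepare makes Protractor wait for the resize to finish before the first spec runs, which can help when the resize appears to be ignored:

onPrepare: function() {
  // wait for the window to actually be resized before any spec starts
  return browser.driver.manage().window().setSize(1600, 1000);
},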

How can I close my browser instance if my test fails in Nightwatch framework

I want to close my browser instance whenever my test scenario fails.
The issue currently is that my test execution proceeds to the next scenario, and because my last window is still looking for a locator or the like, even my next scenario fails.
Is there a way I can close my browser window if my test scenario fails in Nightwatch?
test_settings: {
  default: {
    launch_url: 'http://localhost',
    page_objects_path: './e2e-nightwatch/ionic/objects',
    selenium_host: '127.0.0.1',
    selenium_port: 4444,
    request_timeout_options: {
      timeout: 7000,
      retry_attempts: 5
    },
    end_session_on_fail: true,
    skip_testcases_on_fail: true,
    disable_colors: false,
    screenshots: {
      enabled: true,
      on_failure: true,
      on_error: true,
      path: './e2e-nightwatch/ionic/screenshots'
    },
    desiredCapabilities: {
      browserName: 'chrome',
      javascriptEnabled: true,
      acceptSslCerts: true,
      "chromeOptions": {
        "args": ["start-maximized"]
      }
    },
  },
What you are probably observing is not a failure but an error (speaking in Nightwatch terms). This could be an execution error, an exception or something like that:
ERROR: Unable to locate element: "#someElement" using: css selector
This is not considered a failure (a failure actually means a failed assertion), thus the skip_testcases_on_fail option won't have any effect (btw: this option is set to true by default).
There are two ways to solve this.
First (preferred):
Before performing any action on an element, for instance doing .click('#someElement'), it is good practice to run an assertion on the same selector first:
// fails immediately if element is not present
browser.assert.elementPresent('#someElement')
// wait for element to be present for specified time in ms
browser.waitForElementPresent('#someElement', 2000)
Both methods will stop execution of your test run if the element is not present.
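Put together, a test case following this practice might look something like the sketch below (the URL, selectors and expected text are hypothetical):

module.exports = {
  'Submit the form': function(browser) {
    browser
      .url('http://localhost/form')
      // fail fast if the button never appears, instead of erroring later in the chain
      .waitForElementPresent('#submitButton', 2000)
      .click('#submitButton')
      .assert.containsText('#status', 'Done')
      .end();
  }
};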
Second:
In case you don't want to use the first method for some reason, or are just curious, there is another way to stop execution of the entire test run.
Within the test file you can define an afterEach function, which is fired after each test case completes. Inside this function you have access to an object containing the stats of the current test run. What you can do is simply check whether the error count is greater than 0 and pass an error to the done() callback, which will interrupt the entire test run:
module.exports = {
  'Test case #1': function(browser) {},

  afterEach: function(browser, done) {
    done(browser.currentTest.results.errors ? new Error('Errors occurred during the last test run') : null);
  }
};
Let me know if it worked for you.
Yeah, you can use skip_testcases_on_fail for this. In your config just add
skip_testcases_on_fail: true
This will finish the current test step and then exit before executing any additional steps. You can see a full list of test settings here.
You might also be able to use end_session_on_fail; it depends on how you want it to work.
Driver.Dispose();
Closes the browser.
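In Nightwatch terms, the closest equivalent to disposing the driver is browser.end(). If you want the session closed explicitly when errors occurred, it can be combined with the afterEach approach above (a sketch, not taken from the original answers):

module.exports = {
  'Test case #1': function(browser) {},

  afterEach: function(browser, done) {
    if (browser.currentTest.results.errors > 0) {
      // close the browser session first, then abort the rest of the run
      browser.end(function() {
        done(new Error('Errors occurred during the last test run'));
      });
    } else {
      done();
    }
  }
};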

Jasmine test makes no pass/fail report under webdriver.io

Running the following Jasmine test under webdriver.io like this: node path/to/test/script.js, the test executes (the web browser is pulled up, the target page is visited), and thanks to the last line, the Jasmine 'it' functions (below) do execute (without the last line they don't, although the 'describe' function still runs).
But Jasmine doesn't provide any kind of report for the 'it' tests and the 'expect' assertions; there's nothing on the console from Jasmine. There's no 'pass/fail' result, and so forth.
How do I get Jasmine to produce a report, especially one that is readable by Jenkins?
The problem test script:
var webdriverjs = require('foo-bar/node_modules/webdriverio');
var jasmine = require('foo-bar/node_modules/jasmine-node');

var options = {
  port: 4445,
  desiredCapabilities: {
    browserName: process.argv[2] || 'phantomjs'
  }
};

describe('my webdriverjs tests', function () {
  var client;
  jasmine.DEFAULT_TIMEOUT_INTERVAL = 9999999;

  beforeEach(function() {
    client = webdriverjs.remote(options);
    client.init();
  });

  it('shows the correct title', function (done) {
    client
      .url('http://localhost:4444').getTitle(function(err, title) {
        expect(title).toBe('foo bar');
      }).call(done);
  });

  afterEach(function(done) {
    client.end(done);
  });
});

jasmine.getEnv().execute();
Note: Cross-posted here: https://groups.google.com/forum/#!topic/webdriverio/-EOrQ003B9I
I ran into some of the same challenges when I was looking into this. The big issue is that this test needs to be executed as a Jasmine test, not a webdriver test.
describe('my webdriverio tests with jasmine', function() {
  var client;

  beforeEach(function() {
    client = require('path/to/webdriverio').remote({
      desiredCapabilities: { browserName: 'safari' }
    }).init().url('https://www.stackoverflow');
  }, 5000);

  afterEach(function(done) {
    client.end(done);
  }, 5000);

  it('runs a very simple test', function(done) {
    client.getTitle(function(err, result) {
      expect(result).toBe('Stack Overflow');
    }).call(done);
  }, 5000);
});
Now to run this test, you would just run a typical jasmine-node command from your terminal.
It comes down to the naming convention you are using. First, remove the last line, jasmine.getEnv().execute();, then run the jasmine-node command with the --matchall flag:
jasmine-node --matchall path/to/test/script.js
If you named your file script_spec.js, you could run it without the --matchall flag.
This also assumes you have jasmine-node installed globally. If you want to use the local node_modules dependency, run this command instead:
./node_modules/jasmine-node/bin/jasmine-node --matchall path/to/test/script.js
When you are using the jasmine-node module you should run your spec with
node_modules/jasmine-node/bin/jasmine-node $TEST_DIRECTORY
Your test files should end with *spec.js, *spec.coffee or *spec.litcoffee, as the docs say.
Also, jasmine.getEnv().execute(); and var jasmine = require('foo-bar/node_modules/jasmine-node'); should not be in your script.
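For the Jenkins part of the question: jasmine-node can emit JUnit-style XML, which Jenkins' JUnit plugin can consume. The exact flag names depend on your jasmine-node version (check jasmine-node --help), but something along these lines should work:

# write JUnit XML reports into ./reports for Jenkins to pick up
./node_modules/jasmine-node/bin/jasmine-node --matchall --junitreport --output ./reports path/to/test/script.js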