I have used Nightwatch.js to automate e2e test cases for my product. It works very well on Chrome, Firefox, and other UI-based browsers. However, I need to run it on PhantomJS as a headless browser so the tests can run as part of Jenkins.
I tried, but the test script does not work with PhantomJS.
Test Script:
describe('TEST PHANTOMJS#', function() {
  afterEach((client, done) => {
    client.end(() => done());
  });

  it('successful test google.com', (client) => {
    // Launch google.com
    client.url('https://www.google.com').resizeWindow(1000, 800);
    console.log('Launched Google');
    client.expect.element('body1').to.be.present.before(1000); // test error
    console.log('Completed testing');
  });
});
My nightwatch.json configuration:
{
  "src_folders": ["tests"],
  "output_folder": "reports",
  "custom_commands_path": "",
  "custom_assertions_path": "",
  "page_objects_path": "",
  "selenium": {
    "start_process": true,
    "server_path": "./bin/selenium/selenium-server-standalone-3.0.1.jar",
    "log_path": "",
    "port": 4444,
    "cli_args": {
      "webdriver.chrome.driver": "./bin/chrome/chromedriver",
      "webdriver.gecko.driver": "./bin/firefox/geckodriver",
      "webdriver.edge.driver": "./bin/ie/IEDriverServer.exe"
    }
  },
  "test_settings": {
    "default": {
      "selenium_port": 4444,
      "selenium_host": "localhost",
      "default_path_prefix": "/wd/hub",
      "silent": true,
      "screenshots": {
        "enabled": true,
        "on_failure": true,
        "path": "./screen-shots"
      },
      "desiredCapabilities": {
        "browserName": "phantomjs",
        "javascriptEnabled": true,
        "acceptSslCerts": true,
        "phantomjs.binary.path": "./node_modules/phantomjs-prebuilt/bin/phantomjs",
        "phantomjs.page.settings.userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.80 Safari/537.36",
        "phantomjs.cli.args": []
      },
      "test_runner": {
        "type": "mocha",
        "options": {
          "ui": "bdd",
          "reporter": "list"
        }
      }
    }
  }
}
After running ./node_modules/.bin/nightwatch --env qa --verbose I see the following log:
> nightwatch --env qa --verbose
Starting selenium server... started - PID: 11037
TEST PHANTOMJS# successful test google.com: Launched Google
Completed testing
INFO Request: POST /wd/hub/session
- data: {"desiredCapabilities":{"browserName":"phantomjs","javascriptEnabled":true,"acceptSslCerts":true,"platform":"ANY","phantomjs.binary.path":"./node_modules/phantomjs-prebuilt/bin/phantomjs","phantomjs.page.settings.userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.80 Safari/537.36","phantomjs.cli.args":[]}}
- headers: {"Content-Type":"application/json; charset=utf-8","Content-Length":372}
INFO Response 200 POST /wd/hub/session (1409ms) { state: null,
sessionId: 'd16c7439-18ec-4b67-85eb-e3dda6fe0075',
hCode: 1253002783,
value:
{ applicationCacheEnabled: false,
rotatable: false,
handlesAlerts: false,
databaseEnabled: false,
version: '2.1.1',
platform: 'MAC',
browserConnectionEnabled: false,
proxy: { proxyType: 'direct' },
nativeEvents: true,
acceptSslCerts: false,
driverVersion: '1.2.0',
'webdriver.remote.sessionid': 'd16c7439-18ec-4b67-85eb-e3dda6fe0075',
'phantomjs.page.settings.userAgent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.80 Safari/537.36',
locationContextEnabled: false,
webStorageEnabled: false,
browserName: 'phantomjs',
takesScreenshot: true,
driverName: 'ghostdriver',
javascriptEnabled: true,
cssSelectorsEnabled: true },
class: 'org.openqa.selenium.remote.Response',
status: 0 }
INFO Got sessionId from selenium d16c7439-18ec-4b67-85eb-e3dda6fe0075
INFO Request: POST /wd/hub/session/d16c7439-18ec-4b67-85eb-e3dda6fe0075/url
- data: {"url":"https://www.google.com"}
- headers: {"Content-Type":"application/json; charset=utf-8","Content-Length":32}
Ideally, the test should complete and report an assertion error. Instead, it gets stuck and doesn't move any further.
Any help would be appreciated.
When you use PhantomJS against a website served over HTTPS, you generally have to run it with the ignore-ssl-errors option; otherwise you will often run into trouble. If your script works with all graphical browsers but not with PhantomJS, your issue is most likely related to SSL/TLS.
In nightwatch.json, where you configure PhantomJS, make sure to add the CLI option:
"phantomjs.cli.args": ["--ignore-ssl-errors=true"]
The following script does not work without the option (it does not print the page title), but it works when you add it:
module.exports = {
  'PhantomJS': function(browser) {
    browser
      .url('https://www.google.com')
      .waitForElementVisible('body', 1000)
      .getTitle(function(title) {
        console.log(title);
      })
      .end();
  }
};
I'm using the configuration below as capabilities for the remote Firefox browser (Sauce Labs):
12:34:32.312 INFO [ActiveSessionFactory.apply] - Capabilities are: {
  "browserName": "firefox",
  "moz:firefoxOptions": {
    "profile": "Test1",
    "prefs": {
      "devtools.chrome.enabled": false,
      "media.webrtc.hw.h264.enabled": true,
      "media.peerconnection.ice.tcp": true,
      "dom.webnotifications.enabled": false,
      "media.gmp-manager.updateEnabled": true,
      "devtools.console.stdout.content": true,
      "media.getusermedia.screensharing.enabled": true,
      "devtools.debugger.remote-enabled": true,
      "media.peerconnection.video.h264_enabled": true,
      "marionette": true,
      "media.navigator.permission.disabled": true,
      "devtools.debugger.prompt-connection": false,
      "media.navigator.streams.fake": true
    },
    "log": {
      "level": "trace"
    },
    "args": [
      "--use-fake-device-for-media-stream",
      "--use-fake-ui-for-media-stream",
      "--start-maximized",
      "--disable-web-security",
      "enable-logging",
      "-start-debugger-server",
      "9222"
    ]
  },
  "selenium:webdriver.remote.quietExceptions": false
}
Yet I am unable to get the console info into the selenium-driver.log in Sauce Labs. Even setting the option "devtools.console.stdout.content": true does not yield any geckodriver.log file.
With the same settings, Chrome console logs are captured as part of selenium-driver.log. Is there anything else that needs to be configured to get console output for remote Firefox?
Any input on this is very much appreciated.
I'm using Nightwatch, Selenium, and Chrome Driver to conduct automated UI testing. Sometimes there are test failures on any number of remote environments. To debug the failures locally, I would like to keep the browser open on test failure.
I used to be able to do this using config settings in nightwatch.json and nightwatch.conf.js, but I lost my config-settings cheat sheet in a hard drive failure, and despite my best efforts on Google I can't seem to find the right combination again.
If I remember correctly, it wasn't any kind of keepBrowserOpen: true type flag. If memory serves, it was some type of timeout set to a silly high number combined with something else like detachDriver. I've checked the Selenium documentation, with a specific focus on the ChromeDriver options, but I haven't found any working combinations.
What am I missing?
Here is my nightwatch.json
{
  "output_folder": "tests/automation/reports",
  "end_session_on_fail": false,
  "test_settings": {
    "default": {
      "selenium_port": 4444,
      "screenshots": {
        "enabled": true,
        "path": "tests/automation/reports/screenshots"
      },
      "javascriptEnabled": true,
      "acceptSslCerts": true,
      "acceptInsecureCerts": true,
      "globals": {
        "waitForConditionTimeout": 10000
      }
    },
    "chrome": {
      "extends": "default",
      "desiredCapabilities": {
        "browserName": "chrome",
        "javascriptEnabled": true,
        "acceptSslCerts": true,
        "acceptInsecureCerts": true,
        "chromeOptions": {
          "args": [
            "--no-sandbox",
            "--disable-gpu"
          ]
        }
      }
    }
  }
}
And my nightwatch.conf.js
const geckoDriver = require('geckodriver');
const chromeDriver = require('chromedriver');
const seleniumServer = require('selenium-server');

module.exports = (function (settings) {
  settings.selenium = {
    start_process: true,
    server_path: seleniumServer.path,
    log_path: 'tests/automation/reports/',
    host: 'localhost',
    port: 4444,
    cli_args: {
      'webdriver.chrome.driver': chromeDriver.path,
      'webdriver.gecko.driver': geckoDriver.path,
      'webdriver.edge.driver': 'C:\\windows\\system32\\MicrosoftWebDriver.exe',
      'webdriver.port': '4444'
    }
  };

  return settings;
})(require('./nightwatch.json'));
Selenium Version is 2.35.1
This is not an elegant solution, but it is close to what you are looking for. Basically, we can check the value of browser.currentTest.results.errors in afterEach(), and if the value is greater than 0, we can pause the test there, keeping the browser session alive.
afterEach: function(browser) {
  if (browser.currentTest.results.errors > 0) {
    browser.pause();
  }
}
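For context, a minimal sketch of how that hook could sit inside a full test module (the URL, selector, and test name here are placeholders, not from your suite):

module.exports = {
  // Placeholder test step; replace with your real test
  'sample test': function(browser) {
    browser
      .url('https://localhost')
      .waitForElementVisible('body', 10000);
  },

  afterEach: function(browser) {
    // If the finished test recorded any errors, pause indefinitely
    // so the browser session stays open for debugging
    if (browser.currentTest.results.errors > 0) {
      browser.pause();
    }
  }
};

The "end_session_on_fail": false setting already present in your nightwatch.json should complement this, since it tells Nightwatch not to close the session when a test fails.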
I have set up my wdio.conf.js to use multiple browsers in my tests, as described on the WebdriverIO website (when using the multiremote feature, the capabilities are defined as an object).
In my spec file, when I try to perform an action on one of the named instances, such as myFirefoxBrowser.url('https://myUrl'), the test fails with the error shown below.
Note: when I refer to the browser object directly, both the Chrome and Firefox instances are spawned, as expected.
I have tried requiring the wdio.conf.js file inside my spec file, but it didn't work.
Spec file (.js):
describe('webdriver.io page', () => {
  it('should have the right title', () => {
    myFirefoxBrowser.url('https://webdriver.io');
    const title = myFirefoxBrowser.getTitle();
    assert.strictEqual(title, 'WebdriverIO · Next-gen WebDriver test framework for Node.js');
  });
});
Capabilities (as defined in the wdio.conf.js):
capabilities: {
  myChromeBrowser: {
    capabilities: {
      browserName1: 'chrome'
    }
  },
  myFirefoxBrowser: {
    capabilities: {
      browserName: 'firefox'
    }
  }
},
Error:
ReferenceError: mychromeBrowser is not defined
Expected Results: Only the Firefox browser should navigate to the requested url.
myChromeBrowser != mychromeBrowser. Check this. Also, I'm not sure what you want to do with this: browser is an object for every browser, and if you want to decide on its type, you can do so via browser.capabilities, which will give you an object like this:
{ acceptInsecureCerts: false,
browserName: 'chrome',
browserVersion: '78.0.3904.97',
chrome:
{ chromedriverVersion:
'78.0.3904.70 (edb9c9f3de0247fd912a77b7f6cae7447f6d3ad5-refs/branch-heads/3904#{#800})',
userDataDir: '/tmp/.com.google.Chrome.Ihm6cA' },
'goog:chromeOptions': { debuggerAddress: 'localhost:45423' },
networkConnectionEnabled: false,
pageLoadStrategy: 'normal',
platformName: 'linux',
proxy: {},
setWindowRect: true,
strictFileInteractability: false,
timeouts: { implicit: 0, pageLoad: 300000, script: 30000 },
unhandledPromptBehavior: 'dismiss and notify' }
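For example, a minimal sketch of branching on that object inside a spec (assuming a standard, non-multiremote session; the URL is just a placeholder):

it('runs a browser-specific step', () => {
  // browser.capabilities is populated from the session response shown above
  if (browser.capabilities.browserName === 'firefox') {
    browser.url('https://webdriver.io');
  }
});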
You should use browser. The capabilities should be defined as an array:
// wdio.conf.js
exports.config = {
  // ...
  capabilities: [{
    browserName: 'firefox'
  }, {
    browserName: 'chrome',
    // ...
    proxy: {
      proxyType: "manual",
      httpProxy: "corporate.proxy:8080",
      socksUsername: "codeceptjs",
      socksPassword: "secret",
      noProxy: "127.0.0.1,localhost"
    },
    // ...
  }],
  // ...
}
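With an array like that there is a single global browser object driving the session, so the spec from the question would look roughly like this (a sketch reusing the title assertion from the question):

const assert = require('assert');

describe('webdriver.io page', () => {
  it('should have the right title', () => {
    // Use the global `browser` object instead of a named instance
    browser.url('https://webdriver.io');
    const title = browser.getTitle();
    assert.strictEqual(title, 'WebdriverIO · Next-gen WebDriver test framework for Node.js');
  });
});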
Late to the party, but I think you have a typo. It should be browserName in capabilities.myChromeBrowser, not browserName1. Otherwise the code should do what you want.
capabilities: {
  myChromeBrowser: {
    capabilities: {
      browserName: 'chrome'
    }
  },
  myFirefoxBrowser: {
    capabilities: {
      browserName: 'firefox'
    }
  }
},
I am currently trying to set up WebdriverIO and Selenium together to run my automation tests through Docker, but I am facing issues. Each time I try to run the tests I get the following error: ERROR wdio-runner: Error: connect ECONNREFUSED 127.0.0.1:4444
I am using the selenium/standalone-chrome image from Docker and can see that the server is configured correctly and running at 127.0.0.1:4444, as I can hit it.
When I try to run WebdriverIO, however, I run into the issue mentioned above. I believe the problem must be something in my WebdriverIO config, but having followed the documentation I can't see what's wrong...
const chai = require('chai');
const chaiWebdriver = require('chai-webdriverio').default;

const debug = process.env.DEBUG;

exports.config = {
  runner: 'local',
  host: '127.0.0.1',
  port: 4444,
  path: '/wd/hub',
  specs: ['specs/**/*.js'],
  suites: {
    smoke: ['specs/smoke-spec.js']
  },
  maxInstances: 10,
  capabilities: {
    browserName: 'chrome',
    'goog:chromeOptions': {}
  },
  sync: true,
  logLevel: 'error',
  coloredLogs: true,
  deprecationWarnings: false,
  bail: 0,
  debug,
  execArgv: debug ? ['--inspect'] : [],
  screenshotOnReject: true,
  screenshotPath: './error-screenshots',
  baseUrl: 'https://localhost:443',
  waitforTimeout: 30000,
  connectionRetryTimeout: 90000,
  connectionRetryCount: 3,
  seleniumLogs: './selenium-logs',
  framework: 'mocha',
  reporters: [
    [
      'allure',
      {
        outputDir: 'test-output',
        disableWebdriverStepsReporting: true,
        disableWebdriverScreenshotsReporting: false
      }
    ],
    ['spec', {}]
  ],
  mochaOpts: {
    ui: 'bdd',
    timeout: 400000,
    compilers: ['js:babel-register']
  },
  before() {
    chai.use(chaiWebdriver(browser));
    global.expect = chai.expect;
  },
  afterTest: test => {
    if (!test.passed) {
      browser.takeScreenshot();
    }
  }
};
Is anyone else having this problem? It seems that somewhere goog:chromeOptions is not getting passed to chromedriver properly, which results in Chrome just opening with default options.
I'm using the following Capybara/Selenium configuration:
Capybara.register_driver :chrome do |app|
  capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
    'goog:chromeOptions': {
      args: %w[ start-maximized ]
    }
  )

  Capybara::Selenium::Driver.new(
    app,
    browser: :chrome,
    desired_capabilities: capabilities,
    driver_opts: {
      log_path: "./tmp/chrome#{Time.now.to_i}.log",
      verbose: true
    }
  )
end
Capybara.javascript_driver = :chrome
However, when I open up the Capybara session log, the goog:chromeOptions have not been set at all. Is this why my Chrome window is not maximized?
Session log:
[1550680994.143][INFO]: COMMAND InitSession {
  "capabilities": {
    "firstMatch": [ {
      "browserName": "chrome",
      "goog:chromeOptions": {
        //nothing is here??? should have args!
      }
    } ]
  },
  "desiredCapabilities": {
    "browserName": "chrome",
    "cssSelectorsEnabled": true,
    "goog:chromeOptions": {
      //nothing is here??? should have args!
    },
    "javascriptEnabled": true,
    "nativeEvents": false,
    "platform": "ANY",
    "rotatable": false,
    "takesScreenshot": false,
    "version": ""
  }
}
Operating System:
Ubuntu 18.04
My environment:
ruby 2.6.1
capybara (2.18.0)
selenium-webdriver (3.13.0)
ChromeDriver 2.37.544315
Every time I run a Selenium test, the window is not maximized. But this isn't another "the screen isn't maximized" post (there are lots of those already); this appears to be an issue where my options are not being parsed properly. I don't get what's wrong. I'm following all the READMEs and guides as best I can, but it's just not working :(
Fixed it by upgrading Capybara and Selenium Webdriver!
capybara (3.13.2)
selenium-webdriver (3.141.0)
Now the debug log has what I was expecting to see:
[1550686685.534][INFO]: COMMAND InitSession {
  "capabilities": {
    "firstMatch": [ {
      "browserName": "chrome",
      "goog:chromeOptions": {
        "args": [ "start-maximized" ]
      }
    } ]
  },
  "desiredCapabilities": {
    "browserName": "chrome",
    "cssSelectorsEnabled": true,
    "goog:chromeOptions": {
      "args": [ "start-maximized" ]
    },
    "javascriptEnabled": true,
    "nativeEvents": false,
    "platform": "ANY",
    "rotatable": false,
    "takesScreenshot": false,
    "version": ""
  }
}