How to stop the Selenium installation in the "onPrepare" hook when using WebdriverIO (wdio v7) - selenium

When I run my tests using wdio, the run fails in the "onPrepare" hook because it tries to install the Selenium server, throwing this error:
2021-06-01T16:10:07.130Z INFO @wdio/cli:launcher: Run onPrepare hook
Error in "getDownloadStream". Could not download https://selenium-release.storage.googleapis.com/3.141/selenium-server-standalone-3.141.59.jar
See more details below:
connect ETIMEDOUT 142.250.74.112:443
2021-06-01T16:11:22.829Z ERROR @wdio/cli:utils: A service failed in the 'onPrepare' hook
RequestError: connect ETIMEDOUT 142.250.74.112:443
at ClientRequest.<anonymous> (/Users/test/node_modules/got/dist/source/core/index.js:956:111)
at Object.onceWrapper (events.js:421:26)
at ClientRequest.emit (events.js:326:22)
at ClientRequest.EventEmitter.emit (domain.js:483:12)
at ClientRequest.origin.emit (/Users/test/node_modules/@szmarczak/http-timer/dist/source/index.js:39:20)
at TLSSocket.socketErrorListener (_http_client.js:427:9)
at TLSSocket.emit (events.js:314:20)
at TLSSocket.EventEmitter.emit (domain.js:483:12)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
The issue is that I don't even want wdio to try to install Selenium, because I have already installed it. On wdio v6 this used to work: if I installed Selenium myself before running the tests, I didn't get this error.
So what I want to know is: is there a way to stop/skip the Selenium installation in the onPrepare hook?
This is my wdio config file:
require('@babel/register');
require('@babel/polyfill');
const drivers = {
  proxy: process.env.proxy,
  baseURL: 'https://selenium-release.storage.googleapis.com',
  version: '3.141.59',
  ignoreExtraDrivers: true,
  drivers: {
    chrome: {
      version: '88.0.4324.96',
      arch: process.arch,
      baseURL: 'https://chromedriver.storage.googleapis.com'
    },
    firefox: {
      version: '0.25.0',
      arch: process.arch,
      baseURL: 'https://github.com/mozilla/geckodriver/releases/download'
    }
  }
};
exports.config = {
  runner: 'local',
  specs: ['./browser-tests/specs/**/*.spec.js'],
  exclude: [
    // 'path/to/excluded/files'
  ],
  maxInstances: 10,
  capabilities: [
    {
      maxInstances: 5,
      //
      browserName: 'firefox',
      acceptInsecureCerts: true
    }
  ],
  logLevel: 'debug',
  bail: 0,
  baseUrl: 'http://localhost:3000',
  waitforTimeout: 10000,
  connectionRetryTimeout: 120000,
  connectionRetryCount: 3,
  skipSeleniumInstall: true,
  services: [
    [
      'selenium-standalone',
      {
        args: { drivers } // drivers to use
      }
    ]
  ],
  framework: 'jasmine',
  reporters: ['spec'],
};

I had the file wdio.conf.js created automatically when I first set up wdio.
exports.config = {
//
// ====================
// Runner Configuration
// ====================
//
//
// ==================
// Specify Test Files
// ==================
// Define which test specs should run. The pattern is relative to the directory
// from which `wdio` was called.
//
// The specs are defined as an array of spec files (optionally using wildcards
// that will be expanded). The test for each spec file will be run in a separate
// worker process. In order to have a group of spec files run in the same worker
// process simply enclose them in an array within the specs array.
//
// If you are calling `wdio` from an NPM script (see https://docs.npmjs.com/cli/run-script),
// then the current working directory is where your `package.json` resides, so `wdio`
// will be called from there.
//
specs: [
'./test/specs/**/*.js'
],
// Patterns to exclude.
exclude: [
// 'path/to/excluded/files'
],
//
// ============
// Capabilities
// ============
// Define your capabilities here. WebdriverIO can run multiple capabilities at the same
// time. Depending on the number of capabilities, WebdriverIO launches several test
// sessions. Within your capabilities you can overwrite the spec and exclude options in
// order to group specific specs to a specific capability.
//
// First, you can define how many instances should be started at the same time. Let's
// say you have 3 different capabilities (Chrome, Firefox, and Safari) and you have
// set maxInstances to 1; wdio will spawn 3 processes. Therefore, if you have 10 spec
// files and you set maxInstances to 10, all spec files will get tested at the same time
// and 30 processes will get spawned. The property handles how many capabilities
// from the same test should run tests.
//
maxInstances: 10,
//
// If you have trouble getting all important capabilities together, check out the
// Sauce Labs platform configurator - a great tool to configure your capabilities:
// https://saucelabs.com/platform/platform-configurator
//
capabilities: [{
// maxInstances can get overwritten per capability. So if you have an in-house Selenium
// grid with only 5 firefox instances available you can make sure that not more than
// 5 instances get started at a time.
maxInstances: 5,
//
browserName: 'chrome',
acceptInsecureCerts: true
// If outputDir is provided WebdriverIO can capture driver session logs
// it is possible to configure which logTypes to include/exclude.
// excludeDriverLogs: ['*'], // pass '*' to exclude all driver session logs
// excludeDriverLogs: ['bugreport', 'server'],
}],
//
// ===================
// Test Configurations
// ===================
// Define all options that are relevant for the WebdriverIO instance here
//
// Level of logging verbosity: trace | debug | info | warn | error | silent
logLevel: 'info',
//
// Set specific log levels per logger
// loggers:
// - webdriver, webdriverio
// - #wdio/browserstack-service, #wdio/devtools-service, #wdio/sauce-service
// - #wdio/mocha-framework, #wdio/jasmine-framework
// - #wdio/local-runner
// - #wdio/sumologic-reporter
// - #wdio/cli, #wdio/config, #wdio/utils
// Level of logging verbosity: trace | debug | info | warn | error | silent
// logLevels: {
// webdriver: 'info',
// '#wdio/appium-service': 'info'
// },
//
// If you only want to run your tests until a specific amount of tests have failed use
// bail (default is 0 - don't bail, run all tests).
bail: 0,
//
// Set a base URL in order to shorten url command calls. If your `url` parameter starts
// with `/`, the base url gets prepended, not including the path portion of your baseUrl.
// If your `url` parameter starts without a scheme or `/` (like `some/path`), the base url
// gets prepended directly.
baseUrl: 'http://localhost',
//
// Default timeout for all waitFor* commands.
waitforTimeout: 10000,
//
// Default timeout in milliseconds for request
// if browser driver or grid doesn't send response
connectionRetryTimeout: 120000,
//
// Default request retries count
connectionRetryCount: 3,
//
// Test runner services
// Services take over a specific job you don't want to take care of. They enhance
// your test setup with almost no effort. Unlike plugins, they don't add new
// commands. Instead, they hook themselves up into the test process.
services: ['selenium-standalone'],
// Framework you want to run your specs with.
// The following are supported: Mocha, Jasmine, and Cucumber
// see also: https://webdriver.io/docs/frameworks
//
// Make sure you have the wdio adapter package for the specific framework installed
// before running any tests.
framework: 'mocha',
//
// The number of times to retry the entire specfile when it fails as a whole
// specFileRetries: 1,
//
// Delay in seconds between the spec file retry attempts
// specFileRetriesDelay: 0,
//
// Whether or not retried specfiles should be retried immediately or deferred to the end of the queue
// specFileRetriesDeferred: false,
//
// Test reporter for stdout.
// The only one supported by default is 'dot'
// see also: https://webdriver.io/docs/dot-reporter
reporters: ['spec'],
//
// Options to be passed to Mocha.
// See the full list at http://mochajs.org/
mochaOpts: {
ui: 'bdd',
timeout: 60000
},
//
// =====
// Hooks
// =====
// WebdriverIO provides several hooks you can use to interfere with the test process in order to enhance
// it and to build services around it. You can either apply a single function or an array of
// methods to it. If one of them returns with a promise, WebdriverIO will wait until that promise got
// resolved to continue.
/**
* Gets executed once before all workers get launched.
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
*/
// onPrepare: function (config, capabilities) {
// },
/**
* Gets executed before a worker process is spawned and can be used to initialise specific service
* for that worker as well as modify runtime environments in an async fashion.
* #param {String} cid capability id (e.g 0-0)
* #param {[type]} caps object containing capabilities for session that will be spawn in the worker
* #param {[type]} specs specs to be run in the worker process
* #param {[type]} args object that will be merged with the main configuration once worker is initialized
* #param {[type]} execArgv list of string arguments passed to the worker process
*/
// onWorkerStart: function (cid, caps, specs, args, execArgv) {
// },
/**
* Gets executed just after a worker process has exited.
* #param {String} cid capability id (e.g 0-0)
* #param {Number} exitCode 0 - success, 1 - fail
* #param {[type]} specs specs to be run in the worker process
* #param {Number} retries number of retries used
*/
// onWorkerEnd: function (cid, exitCode, specs, retries) {
// },
/**
* Gets executed just before initialising the webdriver session and test framework. It allows you
* to manipulate configurations depending on the capability or spec.
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that are to be run
* #param {String} cid worker id (e.g. 0-0)
*/
// beforeSession: function (config, capabilities, specs, cid) {
// },
/**
* Gets executed before test execution begins. At this point you can access to all global
* variables like `browser`. It is the perfect place to define custom commands.
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that are to be run
* #param {Object} browser instance of created browser/device session
*/
// before: function (capabilities, specs) {
// },
/**
* Runs before a WebdriverIO command gets executed.
* #param {String} commandName hook command name
* #param {Array} args arguments that command would receive
*/
// beforeCommand: function (commandName, args) {
// },
/**
* Hook that gets executed before the suite starts
* #param {Object} suite suite details
*/
// beforeSuite: function (suite) {
// },
/**
* Function to be executed before a test (in Mocha/Jasmine) starts.
*/
// beforeTest: function (test, context) {
// },
/**
* Hook that gets executed _before_ a hook within the suite starts (e.g. runs before calling
* beforeEach in Mocha)
*/
// beforeHook: function (test, context) {
// },
/**
* Hook that gets executed _after_ a hook within the suite starts (e.g. runs after calling
* afterEach in Mocha)
*/
// afterHook: function (test, context, { error, result, duration, passed, retries }) {
// },
/**
* Function to be executed after a test (in Mocha/Jasmine only)
* #param {Object} test test object
* #param {Object} context scope object the test was executed with
* #param {Error} result.error error object in case the test fails, otherwise `undefined`
* #param {Any} result.result return object of test function
* #param {Number} result.duration duration of test
* #param {Boolean} result.passed true if test has passed, otherwise false
* #param {Object} result.retries informations to spec related retries, e.g. `{ attempts: 0, limit: 0 }`
*/
// afterTest: function(test, context, { error, result, duration, passed, retries }) {
// },
/**
* Hook that gets executed after the suite has ended
* #param {Object} suite suite details
*/
// afterSuite: function (suite) {
// },
/**
* Runs after a WebdriverIO command gets executed
* #param {String} commandName hook command name
* #param {Array} args arguments that command would receive
* #param {Number} result 0 - command success, 1 - command error
* #param {Object} error error object if any
*/
// afterCommand: function (commandName, args, result, error) {
// },
/**
* Gets executed after all tests are done. You still have access to all global variables from
* the test.
* #param {Number} result 0 - test pass, 1 - test fail
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that ran
*/
// after: function (result, capabilities, specs) {
// },
/**
* Gets executed right after terminating the webdriver session.
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that ran
*/
// afterSession: function (config, capabilities, specs) {
// },
/**
* Gets executed after all workers got shut down and the process is about to exit. An error
* thrown in the onComplete hook will result in the test run failing.
* #param {Object} exitCode 0 - success, 1 - fail
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
* #param {<Object>} results object containing test results
*/
// onComplete: function(exitCode, config, capabilities, results) {
// },
/**
* Gets executed when a refresh happens.
* #param {String} oldSessionId session ID of the old session
* #param {String} newSessionId session ID of the new session
*/
// onReload: function(oldSessionId, newSessionId) {
// }
}
I commented out this line: "services: ['selenium-standalone']," and stopped getting the failure after "INFO @wdio/cli:launcher: Run onPrepare hook".
Hope this helps somebody.
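For anyone who wants to keep the selenium-standalone service but skip the download: the service accepts a skipSeleniumInstall option in its own options object (the config in the question puts skipSeleniumInstall at the top level of exports.config, where the service does not pick it up). A minimal sketch, assuming the installed version of @wdio/selenium-standalone-service supports this option:

// wdio.conf.js (sketch) - keep the service, but skip the selenium-server download
services: [
  [
    'selenium-standalone',
    {
      skipSeleniumInstall: true, // use the already-installed selenium server
      args: { drivers }          // drivers/ports to start, as in the config above
    }
  ]
],

With the install skipped, the service still starts the locally installed server, which matches the setup described in the question (Selenium installed by hand beforehand).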

Related

In webdriver-io, how to separate specs from source

I'm using webdriver-io + browserstack (TS/JS project), and I'd like to run my tests against the bundled production output.
Setting specs to the bundled output:
specs: ['dist/test/index.js'],
If I create two separate js files, one for the source and one for the tests, the test is skipped:
INFO @wdio/cli: [0-0] SKIPPED in chrome - /dist/test/index.js
If I bundle everything together, it works.
Poring over the documentation, I can't seem to find the correct way to include source files.
When I use the Jasmine browser runner, I can separate the two via srcFiles:
{
  srcDir: 'dist',
  srcFiles: ['index.umd.js'],
  specDir: 'dist/test',
  specFiles: ['index.js'],
  browser: browser,
}
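(One workaround, given the note above that a single bundle works: have the test entry import the built source before bundling, so the one emitted dist/test/index.js already contains both. A rough sketch; the file names are illustrative, not from this project:)

// test/index.ts (sketch) - bundled to dist/test/index.js; importing the built
// source here keeps a single spec bundle while sources stay in their own files
import '../src/index';        // illustrative path to the library source
import './my-feature.spec';   // the actual specs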
My full wdio.conf.ts:
import type { Options } from '@wdio/types'
import { config as dotEnvConfig } from 'dotenv'

dotEnvConfig()

if (!process.env.BROWSERSTACK_USERNAME || !process.env.BROWSERSTACK_ACCESS_KEY)
  throw new Error(
    'missing .env BROWSERSTACK_USERNAME or BROWSERSTACK_ACCESS_KEY'
  )
export const config: Options.Testrunner = {
//
// ====================
// Runner Configuration
// ====================
//
//
// =====================
// ts-node Configurations
// =====================
//
// You can write tests using TypeScript to get autocompletion and type safety.
// You will need typescript and ts-node installed as devDependencies.
// WebdriverIO will automatically detect if these dependencies are installed
// and will compile your config and tests for you.
// If you need to configure how ts-node runs please use the
// environment variables for ts-node or use wdio config's autoCompileOpts section.
//
user: process.env.BROWSERSTACK_USERNAME,
key: process.env.BROWSERSTACK_ACCESS_KEY,
autoCompileOpts: {
autoCompile: true,
// see https://github.com/TypeStrong/ts-node#cli-and-programmatic-options
// for all available options
tsNodeOpts: {
transpileOnly: true,
project: 'tsconfig.json',
require: ['tsconfig-paths/register'],
},
// tsconfig-paths is only used if "tsConfigPathsOpts" are provided, if you
// do please make sure "tsconfig-paths" is installed as dependency
// tsConfigPathsOpts: {
// baseUrl: './',
// paths: {
// },
// },
},
//
// ==================
// Specify Test Files
// ==================
// Define which test specs should run. The pattern is relative to the directory
// from which `wdio` was called.
//
// The specs are defined as an array of spec files (optionally using wildcards
// that will be expanded). The test for each spec file will be run in a separate
// worker process. In order to have a group of spec files run in the same worker
// process simply enclose them in an array within the specs array.
//
// If you are calling `wdio` from an NPM script (see https://docs.npmjs.com/cli/run-script),
// then the current working directory is where your `package.json` resides, so `wdio`
// will be called from there.
//
/**
* Running against the bundled specs is an order of magnitude faster than running against
* all ts source files.
*/
specs: ['dist/test/index.js'],
// Patterns to exclude.
exclude: [
// 'path/to/excluded/files'
],
//
// ============
// Capabilities
// ============
// Define your capabilities here. WebdriverIO can run multiple capabilities at the same
// time. Depending on the number of capabilities, WebdriverIO launches several test
// sessions. Within your capabilities you can overwrite the spec and exclude options in
// order to group specific specs to a specific capability.
//
// First, you can define how many instances should be started at the same time. Let's
// say you have 3 different capabilities (Chrome, Firefox, and Safari) and you have
// set maxInstances to 1; wdio will spawn 3 processes. Therefore, if you have 10 spec
// files and you set maxInstances to 10, all spec files will get tested at the same time
// and 30 processes will get spawned. The property handles how many capabilities
// from the same test should run tests.
//
maxInstances: 10,
//
// If you have trouble getting all important capabilities together, check out the
// Sauce Labs platform configurator - a great tool to configure your capabilities:
// https://saucelabs.com/platform/platform-configurator
//
capabilities: [
{
// maxInstances can get overwritten per capability. So if you have an in-house Selenium
// grid with only 5 firefox instances available you can make sure that not more than
// 5 instances get started at a time.
maxInstances: 5,
//
browserName: 'chrome',
acceptInsecureCerts: true,
// If outputDir is provided WebdriverIO can capture driver session logs
// it is possible to configure which logTypes to include/exclude.
// excludeDriverLogs: ['*'], // pass '*' to exclude all driver session logs
// excludeDriverLogs: ['bugreport', 'server'],
},
],
//
// ===================
// Test Configurations
// ===================
// Define all options that are relevant for the WebdriverIO instance here
//
// Level of logging verbosity: trace | debug | info | warn | error | silent
logLevel: 'info',
//
// Set specific log levels per logger
// loggers:
// - webdriver, webdriverio
// - #wdio/browserstack-service, #wdio/devtools-service, #wdio/sauce-service
// - #wdio/mocha-framework, #wdio/jasmine-framework
// - #wdio/local-runner
// - #wdio/sumologic-reporter
// - #wdio/cli, #wdio/config, #wdio/utils
// Level of logging verbosity: trace | debug | info | warn | error | silent
// logLevels: {
// webdriver: 'trace',
// webdriverio: 'trace',
// '#wdio/local-runner': 'trace',
// '#wdio/cli': 'trace',
// '#wdio/browserstack-service': 'debug',
// },
//
// If you only want to run your tests until a specific amount of tests have failed use
// bail (default is 0 - don't bail, run all tests).
bail: 0,
//
// Set a base URL in order to shorten url command calls. If your `url` parameter starts
// with `/`, the base url gets prepended, not including the path portion of your baseUrl.
// If your `url` parameter starts without a scheme or `/` (like `some/path`), the base url
// gets prepended directly.
baseUrl: 'http://localhost',
//
// Default timeout for all waitFor* commands.
waitforTimeout: 10000,
//
// Default timeout in milliseconds for request
// if browser driver or grid doesn't send response
connectionRetryTimeout: 120000,
//
// Default request retries count
connectionRetryCount: 3,
//
// Test runner services
// Services take over a specific job you don't want to take care of. They enhance
// your test setup with almost no effort. Unlike plugins, they don't add new
// commands. Instead, they hook themselves up into the test process.
services: ['browserstack'],
// Framework you want to run your specs with.
// The following are supported: Mocha, Jasmine, and Cucumber
// see also: https://webdriver.io/docs/frameworks
//
// Make sure you have the wdio adapter package for the specific framework installed
// before running any tests.
framework: 'jasmine',
//
// The number of times to retry the entire specfile when it fails as a whole
// specFileRetries: 1,
//
// Delay in seconds between the spec file retry attempts
// specFileRetriesDelay: 0,
//
// Whether or not retried specfiles should be retried immediately or deferred to the end of the queue
// specFileRetriesDeferred: false,
//
// Test reporter for stdout.
// The only one supported by default is 'dot'
// see also: https://webdriver.io/docs/dot-reporter
reporters: ['spec'],
//
// Options to be passed to Jasmine.
jasmineOpts: {
// Jasmine default timeout
defaultTimeoutInterval: 60000,
//
// The Jasmine framework allows interception of each assertion in order to log the state of the application
// or website depending on the result. For example, it is pretty handy to take a screenshot every time
// an assertion fails.
expectationResultHandler: function (passed, assertion) {
// do something
},
helpers: [
'test/unit/_setup/jsdom.ts',
'test/unit/_setup/consoleReporter.ts',
],
},
//
// =====
// Hooks
// =====
// WebdriverIO provides several hooks you can use to interfere with the test process in order to enhance
// it and to build services around it. You can either apply a single function or an array of
// methods to it. If one of them returns with a promise, WebdriverIO will wait until that promise got
// resolved to continue.
/**
* Gets executed once before all workers get launched.
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
*/
// onPrepare: function (config, capabilities) {
// },
/**
* Gets executed before a worker process is spawned and can be used to initialise specific service
* for that worker as well as modify runtime environments in an async fashion.
* #param {String} cid capability id (e.g 0-0)
* #param {[type]} caps object containing capabilities for session that will be spawn in the worker
* #param {[type]} specs specs to be run in the worker process
* #param {[type]} args object that will be merged with the main configuration once worker is initialized
* #param {[type]} execArgv list of string arguments passed to the worker process
*/
// onWorkerStart: function (cid, caps, specs, args, execArgv) {
// },
/**
* Gets executed just after a worker process has exited.
* #param {String} cid capability id (e.g 0-0)
* #param {Number} exitCode 0 - success, 1 - fail
* #param {[type]} specs specs to be run in the worker process
* #param {Number} retries number of retries used
*/
// onWorkerEnd: function (cid, exitCode, specs, retries) {
// },
/**
* Gets executed just before initialising the webdriver session and test framework. It allows you
* to manipulate configurations depending on the capability or spec.
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that are to be run
* #param {String} cid worker id (e.g. 0-0)
*/
// beforeSession: function (config, capabilities, specs, cid) {
// },
/**
* Gets executed before test execution begins. At this point you can access to all global
* variables like `browser`. It is the perfect place to define custom commands.
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that are to be run
* #param {Object} browser instance of created browser/device session
*/
// before: function (capabilities, specs) {
// },
/**
* Runs before a WebdriverIO command gets executed.
* #param {String} commandName hook command name
* #param {Array} args arguments that command would receive
*/
// beforeCommand: function (commandName, args) {
// },
/**
* Hook that gets executed before the suite starts
* #param {Object} suite suite details
*/
// beforeSuite: function (suite) {
// },
/**
* Function to be executed before a test (in Mocha/Jasmine) starts.
*/
// beforeTest: function (test, context) {
// },
/**
* Hook that gets executed _before_ a hook within the suite starts (e.g. runs before calling
* beforeEach in Mocha)
*/
// beforeHook: function (test, context) {
// },
/**
* Hook that gets executed _after_ a hook within the suite starts (e.g. runs after calling
* afterEach in Mocha)
*/
// afterHook: function (test, context, { error, result, duration, passed, retries }) {
// },
/**
* Function to be executed after a test (in Mocha/Jasmine only)
* #param {Object} test test object
* #param {Object} context scope object the test was executed with
* #param {Error} result.error error object in case the test fails, otherwise `undefined`
* #param {Any} result.result return object of test function
* #param {Number} result.duration duration of test
* #param {Boolean} result.passed true if test has passed, otherwise false
* #param {Object} result.retries informations to spec related retries, e.g. `{ attempts: 0, limit: 0 }`
*/
// afterTest: function(test, context, { error, result, duration, passed, retries }) {
// },
/**
* Hook that gets executed after the suite has ended
* #param {Object} suite suite details
*/
// afterSuite: function (suite) {
// },
/**
* Runs after a WebdriverIO command gets executed
* #param {String} commandName hook command name
* #param {Array} args arguments that command would receive
* #param {Number} result 0 - command success, 1 - command error
* #param {Object} error error object if any
*/
// afterCommand: function (commandName, args, result, error) {
// },
/**
* Gets executed after all tests are done. You still have access to all global variables from
* the test.
* #param {Number} result 0 - test pass, 1 - test fail
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that ran
*/
// after: function (result, capabilities, specs) {
// },
/**
* Gets executed right after terminating the webdriver session.
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that ran
*/
// afterSession: function (config, capabilities, specs) {
// },
/**
* Gets executed after all workers got shut down and the process is about to exit. An error
* thrown in the onComplete hook will result in the test run failing.
* #param {Object} exitCode 0 - success, 1 - fail
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
* #param {<Object>} results object containing test results
*/
// onComplete: function(exitCode, config, capabilities, results) {
// },
/**
* Gets executed when a refresh happens.
* #param {String} oldSessionId session ID of the old session
* #param {String} newSessionId session ID of the new session
*/
// onReload: function(oldSessionId, newSessionId) {
// }
}


I used userDataDir: 'path' in the driver configuration, but files are still not downloading to the specified location. I expected the file to be downloaded to a specific location.
Can someone help me set up a specific download path for Chrome in Karate, like the Chrome preferences in Selenium?
You could use the experimental Browser.setDownloadBehavior Chrome DevTools Protocol method.
Here is a full example:
Feature:

Background:
  * def waitForDownload = function(downloadPath) { while (!new java.io.File(downloadPath).isFile()) java.lang.Thread.sleep(1000) }

Scenario:
  * configure driver = { type: 'chrome' }
  * driver 'https://github.com/karatelabs/karate/releases/tag/v1.3.0'
  * driver.send({ method: 'Browser.setDownloadBehavior', params: { behavior: 'allow', downloadPath: karate.toAbsolutePath('./someDir') } })
  # scroll to bottom of page to ensure download link is created
  * script('let x = (document.scrollingElement || document.body); x.scrollTop = x.scrollHeight')
  * waitFor("a[href='/karatelabs/karate/archive/refs/tags/v1.3.0.tar.gz']").click()
  * call waitForDownload karate.toAbsolutePath('./someDir/karate-1.3.0.tar.gz')
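If the download can fail, a variant of the polling helper that gives up after a bounded number of attempts avoids hanging the scenario forever. A sketch (the attempt count is illustrative):

* def waitForDownload =
  """
  function(downloadPath) {
    var maxAttempts = 30; // poll once a second for roughly 30 seconds
    for (var i = 0; i < maxAttempts; i++) {
      if (new java.io.File(downloadPath).isFile()) return true;
      java.lang.Thread.sleep(1000);
    }
    return false;
  }
  """

The call site stays the same, or the result can be captured with def and checked with an assert step.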

Swagger ExpressJS returns "No operations defined in spec"

Hello, I'm trying to add Swagger to my Express server, but it returns 'No operations defined in spec!'. I've already watched about 5 different videos but still have no luck fixing this.
The configuration looks mostly correct; only some adjustments to the definition and options should be needed. Here is the code you should rearrange.
You should indent the YAML with spaces (two per level) instead of tabs:
/**
 * @swagger
 * /api/v1/friends:
 *   get:
 *     tags:
 *       - "Healthcheck"
 *     summary: summary of your api endpoint.
 *     description: description of your api endpoint.
 *     responses:
 *       200:
 *         description: List of topics.
 */
Pass the Swagger options with a configuration like this; change the values for your application and check that the file path in apis is accurate. Most values are optional, see the docs for more details.
const swaggerOptions = {
  openapi: '3.0.0',
  swaggerDefinition: {
    info: {
      title: 'Friends API',
      version: '1.0.0',
      description: 'Friends api endpoints',
      servers: ['https://your-api-serverhosturl.com'],
    },
    produces: ['application/json'],
  },
  apis: ['../modules/friends/*.routes.js'],
};
const swaggerDocs = swaggerJsDoc(swaggerOptions);
app.use('/api/api-docs', swaggerUI.serve, swaggerUI.setup(swaggerDocs));
Navigate to http://yourhost:port/api/api-docs
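For completeness, a minimal self-contained wiring sketch with the usual swagger-jsdoc / swagger-ui-express packages; the apis glob is illustrative and is typically resolved relative to the directory node is started from, which is a common cause of "No operations defined in spec":

// app.js (sketch) - minimal Express + swagger-jsdoc + swagger-ui-express wiring
const express = require('express');
const swaggerJsDoc = require('swagger-jsdoc');
const swaggerUI = require('swagger-ui-express');

const app = express();

const swaggerOptions = {
  swaggerDefinition: {
    openapi: '3.0.0',
    info: { title: 'Friends API', version: '1.0.0' },
  },
  // glob for the files containing @swagger JSDoc blocks (illustrative path)
  apis: ['./src/modules/**/*.routes.js'],
};

app.use('/api/api-docs', swaggerUI.serve, swaggerUI.setup(swaggerJsDoc(swaggerOptions)));
app.listen(3000, () => console.log('docs at http://localhost:3000/api/api-docs'));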

WebDriverIO: browser.pause is not working

I have used browser.sleep in Protractor to hold execution for a particular amount of time. In a similar way I have tried browser.pause in WebDriverIO, but it is not pausing for the given amount of time.
I also referred to the official WebDriverIO documentation for browser.pause, and the same usage is given there.
Step Definition Code:
Given(/^Verify the title of Salesforce web page$/, function () {
  browser.url('https://login.salesforce.com/');
  browser.pause(10000);
});
I use async mode in the configuration
WebDriverIO Version: ^5.22.4
wdio.config.js
exports.config = {
//
// ====================
// Runner Configuration
// ====================
//
// WebdriverIO allows it to run your tests in arbitrary locations (e.g. locally or
// on a remote machine).
runner: 'local',
//
// Override default path ('/wd/hub') for chromedriver service.
path: '/',
//
// ==================
// Specify Test Files
// ==================
// Define which test specs should run. The pattern is relative to the directory
// from which `wdio` was called. Notice that, if you are calling `wdio` from an
// NPM script (see https://docs.npmjs.com/cli/run-script) then the current working
// directory is where your package.json resides, so `wdio` will be called from there.
//
specs: [
'./test/features/*.feature'
],
// Patterns to exclude.
exclude: [
// 'path/to/excluded/files'
],
//
// ============
// Capabilities
// ============
// Define your capabilities here. WebdriverIO can run multiple capabilities at the same
// time. Depending on the number of capabilities, WebdriverIO launches several test
// sessions. Within your capabilities you can overwrite the spec and exclude options in
// order to group specific specs to a specific capability.
//
// First, you can define how many instances should be started at the same time. Let's
// say you have 3 different capabilities (Chrome, Firefox, and Safari) and you have
// set maxInstances to 1; wdio will spawn 3 processes. Therefore, if you have 10 spec
// files and you set maxInstances to 10, all spec files will get tested at the same time
// and 30 processes will get spawned. The property handles how many capabilities
// from the same test should run tests.
//
maxInstances: 10,
//
// If you have trouble getting all important capabilities together, check out the
// Sauce Labs platform configurator - a great tool to configure your capabilities:
// https://docs.saucelabs.com/reference/platforms-configurator
//
capabilities: [{
// maxInstances can get overwritten per capability. So if you have an in-house Selenium
// grid with only 5 firefox instances available you can make sure that not more than
// 5 instances get started at a time.
maxInstances: 5,
//
browserName: 'chrome',
// If outputDir is provided WebdriverIO can capture driver session logs
// it is possible to configure which logTypes to include/exclude.
// excludeDriverLogs: ['*'], // pass '*' to exclude all driver session logs
// excludeDriverLogs: ['bugreport', 'server'],
}],
//
// ===================
// Test Configurations
// ===================
// Define all options that are relevant for the WebdriverIO instance here
//
// Level of logging verbosity: trace | debug | info | warn | error | silent
logLevel: 'info',
//
// Set specific log levels per logger
// loggers:
// - webdriver, webdriverio
// - #wdio/applitools-service, #wdio/browserstack-service, #wdio/devtools-service, #wdio/sauce-service
// - #wdio/mocha-framework, #wdio/jasmine-framework
// - #wdio/local-runner, #wdio/lambda-runner
// - #wdio/sumologic-reporter
// - #wdio/cli, #wdio/config, #wdio/sync, #wdio/utils
// Level of logging verbosity: trace | debug | info | warn | error | silent
// logLevels: {
// webdriver: 'info',
// '#wdio/applitools-service': 'info'
// },
//
// If you only want to run your tests until a specific amount of tests have failed use
// bail (default is 0 - don't bail, run all tests).
bail: 0,
//
// Set a base URL in order to shorten url command calls. If your `url` parameter starts
// with `/`, the base url gets prepended, not including the path portion of your baseUrl.
// If your `url` parameter starts without a scheme or `/` (like `some/path`), the base url
// gets prepended directly.
baseUrl: 'https://login.salesforce.com/',
//
// Default timeout for all waitFor* commands.
waitforTimeout: 10000,
//
// Default timeout in milliseconds for request
// if browser driver or grid doesn't send response
connectionRetryTimeout: 90000,
//
// Default request retries count
connectionRetryCount: 0,
//
// Test runner services
// Services take over a specific job you don't want to take care of. They enhance
// your test setup with almost no effort. Unlike plugins, they don't add new
// commands. Instead, they hook themselves up into the test process.
services: ['chromedriver','firefox-profile'],
// Framework you want to run your specs with.
// The following are supported: Mocha, Jasmine, and Cucumber
// see also: https://webdriver.io/docs/frameworks.html
//
// Make sure you have the wdio adapter package for the specific framework installed
// before running any tests.
framework: 'cucumber',
//
// The number of times to retry the entire specfile when it fails as a whole
// specFileRetries: 1,
//
// Test reporter for stdout.
// The only one supported by default is 'dot'
// see also: https://webdriver.io/docs/dot-reporter.html
reporters: ['spec',['allure', {
outputDir: 'allure-results',
disableWebdriverStepsReporting: true,
disableWebdriverScreenshotsReporting: false,
useCucumberStepReporter:true
}]],
// If you are using Cucumber you need to specify the location of your step definitions.
cucumberOpts: {
require: ['./built/**/*.js'], // <string[]> (file/dir) require files before executing features
backtrace: false, // <boolean> show full backtrace for errors
requireModule: [], // <string[]> ("extension:module") require files with the given EXTENSION after requiring MODULE (repeatable)
dryRun: false, // <boolean> invoke formatters without executing steps
failFast: false, // <boolean> abort the run on first failure
format: ['pretty'], // <string[]> (type[:path]) specify the output format, optionally supply PATH to redirect formatter output (repeatable)
colors: true, // <boolean> disable colors in formatter output
snippets: true, // <boolean> hide step definition snippets for pending steps
source: true, // <boolean> hide source uris
profile: [], // <string[]> (name) specify the profile to use
strict: false, // <boolean> fail if there are any undefined or pending steps
tagExpression: '', // <string> (expression) only execute the features or scenarios with tags matching the expression
timeout: 60000, // <number> timeout for step definitions
ignoreUndefinedDefinitions: false, // <boolean> Enable this config to treat undefined definitions as warnings.
},
//
// =====
// Hooks
// =====
// WebdriverIO provides several hooks you can use to interfere with the test process in order to enhance
// it and to build services around it. You can either apply a single function or an array of
// methods to it. If one of them returns with a promise, WebdriverIO will wait until that promise got
// resolved to continue.
/**
* Gets executed once before all workers get launched.
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
*/
// onPrepare: function (config, capabilities) {
// },
/**
* Gets executed just before initialising the webdriver session and test framework. It allows you
* to manipulate configurations depending on the capability or spec.
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that are to be run
*/
// beforeSession: function (config, capabilities, specs) {
// },
/**
* Gets executed before test execution begins. At this point you can access to all global
* variables like `browser`. It is the perfect place to define custom commands.
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that are to be run
*/
before: function (_capabilities) {
// =================
// Assertion Library
// =================
const chai = require('chai');
global.expect = chai.expect;
global.assert = chai.assert;
global.should = chai.should();
require('ts-node').register({ files: true });
},
/**
* Runs before a WebdriverIO command gets executed.
* #param {String} commandName hook command name
* #param {Array} args arguments that command would receive
*/
// beforeCommand: function (commandName, args) {
// },
/**
* Runs before a Cucumber feature
*/
beforeFeature: function (uri, feature, scenarios) {
scenarioCounter = 0;
},
/**
* Runs before a Cucumber scenario
*/
// beforeScenario: function (uri, feature, scenario, sourceLocation) {
// },
/**
* Runs before a Cucumber step
*/
// beforeStep: function (uri, feature, stepData, context) {
// },
/**
* Runs after a Cucumber step
*/
afterStep: function (uri, feature, { error, result, duration, passed }, stepData, context) {
if (error !== undefined) {
browser.takeScreenshot();
}
},
/**
* Runs after a Cucumber scenario
*/
afterScenario: function (uri, feature, scenario, result, sourceLocation) {
scenarioCounter += 1;
addArgument('Scenario #', scenarioCounter);
},
/**
* Runs after a Cucumber feature
*/
// afterFeature: function (uri, feature, scenarios) {
// },
/**
* Runs after a WebdriverIO command gets executed
* #param {String} commandName hook command name
* #param {Array} args arguments that command would receive
* #param {Number} result 0 - command success, 1 - command error
* #param {Object} error error object if any
*/
// afterCommand: function (commandName, args, result, error) {
// },
/**
* Gets executed after all tests are done. You still have access to all global variables from
* the test.
* #param {Number} result 0 - test pass, 1 - test fail
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that ran
*/
// after: function (result, capabilities, specs) {
// },
/**
* Gets executed right after terminating the webdriver session.
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that ran
*/
// afterSession: function (config, capabilities, specs) {
// },
/**
* Gets executed after all workers got shut down and the process is about to exit. An error
* thrown in the onComplete hook will result in the test run failing.
* #param {Object} exitCode 0 - success, 1 - fail
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
* #param {<Object>} results object containing test results
*/
// onComplete: function(exitCode, config, capabilities, results) {
// },
/**
* Gets executed when a refresh happens.
* #param {String} oldSessionId session ID of the old session
* #param {String} newSessionId session ID of the new session
*/
//onReload: function(oldSessionId, newSessionId) {
//}
}
Since WebDriverIO was in async mode, I had to add await to both lines, as below.
Given(/^Verify the title of Salesforce web page$/, async function () {
  await browser.url('https://login.salesforce.com/');
  await browser.pause(10000);
});
It might be worth trying async/await as below:
Given(/^Verify the title of Salesforce web page$/, async function () {
  await browser.url('https://login.salesforce.com/');
  await browser.pause(10000);
});
I hope it helps.
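As a side note, where the goal is to wait for the application rather than for a fixed time, browser.waitUntil can replace a hard pause. A rough sketch (the title check and timeout are illustrative; the options-object signature is the wdio v6+ form, older versions take positional timeout arguments):

Given(/^Verify the title of Salesforce web page$/, async function () {
  await browser.url('https://login.salesforce.com/');
  // wait for a condition instead of sleeping a fixed 10 seconds
  await browser.waitUntil(
    async () => (await browser.getTitle()).includes('Salesforce'),
    { timeout: 10000, timeoutMsg: 'expected the Salesforce title within 10s' }
  );
});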
In my case, someone had globally activated fake timers (jest.useFakeTimers()).
For now, in my end-to-end tests, I'm just switching back to real timers (jest.useRealTimers()).
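A minimal sketch of that switch, assuming fake timers were enabled in a global Jest setup file:

// e2e spec setup: undo a global jest.useFakeTimers() so real waits
// (like browser.pause) behave normally in these tests
beforeAll(() => {
  jest.useRealTimers();
});

afterAll(() => {
  // restore the suite-wide default if other tests rely on fake timers
  jest.useFakeTimers();
});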

Karate - runtime error in evaluation of 'karate-config.js' [duplicate]

I'm trying to use Karate for e2e tests and have started with a minimal setup. I want to create some config items in karate-config.js for use in the tests, but Karate reports that the file is not a js function, and hence the test fails trying to read the config:
Warning: Nashorn engine is planned to be removed from a future JDK release
12:16:35.264 [Test worker] WARN com.intuit.karate - not a js function or feature file: read('classpath:karate-config.js') - [type: NULL, value: null]
---------------------------------------------------------
feature: classpath:karate/insurer.feature
scenarios: 1 | passed: 0 | failed: 1 | time: 0.0163
---------------------------------------------------------
HTML report: (paste into browser to view) | Karate version: 0.9.1
file:/Users/srowatt/dev/repos/api/price-service/build/surefire-reports/karate.insurer.html
---------------------------------------------------------
-unknown-:4 - javascript evaluation failed: priceBaseUrl, ReferenceError: "priceBaseUrl" is not defined in <eval> at line number 1
org.opentest4j.AssertionFailedError: -unknown-:4 - javascript evaluation failed: priceBaseUrl, ReferenceError: "priceBaseUrl" is not defined in <eval> at line number 1
This is my karate-config.js:
function fn() {
  return {
    priceBaseUrl: "http://localhost:8080"
  };
}
This is my insurer.feature test:
Feature: which creates insurers

Background:
  * url priceBaseUrl
  * configure logPrettyRequest = true
  * configure logPrettyResponse = true

Scenario: basic roundtrip
  # create a new insurer
  Given path 'insurers'
  And request { name: 'Sammy Insurance', companyCode: '99' }
  When method post
  Then status 201
  And match response == { resourceId: '#number', version: 0, createdBy: 'anonymousUser' }
  * def insurerId = response.resourceId

  # get insurer by resource id
  Given path 'insurers', insurerId
  When method get
  Then status 200
  And match response == { id: '#(id)', name: 'Sammy Insurance', companyCode: '99' }
This is the InsurerTest.java test runner:
package karate;

import com.intuit.karate.junit5.Karate;

class InsurerTest {

    @Karate.Test
    public Karate testInsurer() {
        return new Karate().feature("classpath:karate/insurer.feature");
    }
}
Please use the code below in karate-config.js:
function() {
  return priceBaseUrl = 'http://localhost:8080';
}
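For reference, the documented shape of karate-config.js is a single anonymous function that returns a plain object of config keys (as in the question). A minimal sketch with an environment switch, where the URLs and environment names are illustrative:

function fn() {
  // karate.env is set on the command line, e.g. -Dkarate.env=ci
  var env = karate.env || 'dev';
  var config = {
    priceBaseUrl: 'http://localhost:8080'
  };
  if (env === 'ci') {
    config.priceBaseUrl = 'http://price-service:8080';
  }
  return config;
}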
When I see this:
Warning: Nashorn engine is planned to be removed from a future JDK release
I suspect you are on Java 9 or 11? To be honest, we haven't fully tested Karate on those versions of Java yet. Would it be possible for you to confirm that Java 8 (and maybe 9/10 also) is OK?
That said, we are interested in resolving this as soon as possible, so if you can submit a sample project where we can replicate this, please do so: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
EDIT: Karate 1.0 will use GraalVM instead of Nashorn and will run even on JDK 16: https://software-that-matters.com/2021/01/27/7-new-features-in-karate-test-automation-version-1_0/