In WebdriverIO, how to separate specs from source

I'm using WebdriverIO + BrowserStack (TS/JS project), and I'd like to run my tests against the bundled production output.
Setting specs to the bundled output:
specs: ['dist/test/index.js'],
If I create two separate JS files, one for the source and one for the tests, the test is skipped:
INFO @wdio/cli: [0-0] SKIPPED in chrome - /dist/test/index.js
If I bundle everything together, it works.
Poring over the documentation, I can't seem to find the correct way to include source files.
When I use the Jasmine browser runner, I can separate the two via srcFiles:
{
  srcDir: 'dist',
  srcFiles: ['index.umd.js'],
  specDir: 'dist/test',
  specFiles: ['index.js'],
  browser: browser,
}
My full wdio.conf.ts:
import type { Options } from '@wdio/types'
import { config as dotEnvConfig } from 'dotenv'
dotEnvConfig()
if (!process.env.BROWSERSTACK_USERNAME || !process.env.BROWSERSTACK_ACCESS_KEY)
throw new Error(
'missing .env BROWSERSTACK_USERNAME or BROWSERSTACK_ACCESS_KEY'
)
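// The check above expects these two variables in a local .env file read by dotenv;
// an illustrative example (placeholder values, not from the original post):
//   BROWSERSTACK_USERNAME=your-browserstack-user
//   BROWSERSTACK_ACCESS_KEY=your-browserstack-access-key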
export const config: Options.Testrunner = {
//
// ====================
// Runner Configuration
// ====================
//
//
// =====================
// ts-node Configurations
// =====================
//
// You can write tests using TypeScript to get autocompletion and type safety.
// You will need typescript and ts-node installed as devDependencies.
// WebdriverIO will automatically detect if these dependencies are installed
// and will compile your config and tests for you.
// If you need to configure how ts-node runs please use the
// environment variables for ts-node or use wdio config's autoCompileOpts section.
//
user: process.env.BROWSERSTACK_USERNAME,
key: process.env.BROWSERSTACK_ACCESS_KEY,
autoCompileOpts: {
autoCompile: true,
// see https://github.com/TypeStrong/ts-node#cli-and-programmatic-options
// for all available options
tsNodeOpts: {
transpileOnly: true,
project: 'tsconfig.json',
require: ['tsconfig-paths/register'],
},
// tsconfig-paths is only used if "tsConfigPathsOpts" are provided, if you
// do please make sure "tsconfig-paths" is installed as dependency
// tsConfigPathsOpts: {
// baseUrl: './',
// paths: {
// },
// },
},
//
// ==================
// Specify Test Files
// ==================
// Define which test specs should run. The pattern is relative to the directory
// from which `wdio` was called.
//
// The specs are defined as an array of spec files (optionally using wildcards
// that will be expanded). The test for each spec file will be run in a separate
// worker process. In order to have a group of spec files run in the same worker
// process simply enclose them in an array within the specs array.
//
// If you are calling `wdio` from an NPM script (see https://docs.npmjs.com/cli/run-script),
// then the current working directory is where your `package.json` resides, so `wdio`
// will be called from there.
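// For illustration (hypothetical paths, not from the original config): flat entries run
// in separate workers, while a nested array keeps its files in the same worker, e.g.
// specs: ['dist/test/smoke.js', ['dist/test/checkout-a.js', 'dist/test/checkout-b.js']],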
//
/**
* Running against the bundled specs is an order of magnitude faster than running against
* all ts source files.
*/
specs: ['dist/test/index.js'],
// Patterns to exclude.
exclude: [
// 'path/to/excluded/files'
],
//
// ============
// Capabilities
// ============
// Define your capabilities here. WebdriverIO can run multiple capabilities at the same
// time. Depending on the number of capabilities, WebdriverIO launches several test
// sessions. Within your capabilities you can overwrite the spec and exclude options in
// order to group specific specs to a specific capability.
//
// First, you can define how many instances should be started at the same time. Let's
// say you have 3 different capabilities (Chrome, Firefox, and Safari) and you have
// set maxInstances to 1; wdio will spawn 3 processes. Therefore, if you have 10 spec
// files and you set maxInstances to 10, all spec files will get tested at the same time
// and 30 processes will get spawned. The property handles how many capabilities
// from the same test should run tests.
//
maxInstances: 10,
//
// If you have trouble getting all important capabilities together, check out the
// Sauce Labs platform configurator - a great tool to configure your capabilities:
// https://saucelabs.com/platform/platform-configurator
//
capabilities: [
{
// maxInstances can get overwritten per capability. So if you have an in-house Selenium
// grid with only 5 firefox instances available you can make sure that not more than
// 5 instances get started at a time.
maxInstances: 5,
//
browserName: 'chrome',
acceptInsecureCerts: true,
// If outputDir is provided WebdriverIO can capture driver session logs
// it is possible to configure which logTypes to include/exclude.
// excludeDriverLogs: ['*'], // pass '*' to exclude all driver session logs
// excludeDriverLogs: ['bugreport', 'server'],
},
],
//
// ===================
// Test Configurations
// ===================
// Define all options that are relevant for the WebdriverIO instance here
//
// Level of logging verbosity: trace | debug | info | warn | error | silent
logLevel: 'info',
//
// Set specific log levels per logger
// loggers:
// - webdriver, webdriverio
// - @wdio/browserstack-service, @wdio/devtools-service, @wdio/sauce-service
// - @wdio/mocha-framework, @wdio/jasmine-framework
// - @wdio/local-runner
// - @wdio/sumologic-reporter
// - @wdio/cli, @wdio/config, @wdio/utils
// Level of logging verbosity: trace | debug | info | warn | error | silent
// logLevels: {
// webdriver: 'trace',
// webdriverio: 'trace',
// '@wdio/local-runner': 'trace',
// '@wdio/cli': 'trace',
// '@wdio/browserstack-service': 'debug',
// },
//
// If you only want to run your tests until a specific amount of tests have failed use
// bail (default is 0 - don't bail, run all tests).
bail: 0,
//
// Set a base URL in order to shorten url command calls. If your `url` parameter starts
// with `/`, the base url gets prepended, not including the path portion of your baseUrl.
// If your `url` parameter starts without a scheme or `/` (like `some/path`), the base url
// gets prepended directly.
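// For illustration (not from the original config): with baseUrl 'http://localhost',
// browser.url('/login') resolves to 'http://localhost/login'.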
baseUrl: 'http://localhost',
//
// Default timeout for all waitFor* commands.
waitforTimeout: 10000,
//
// Default timeout in milliseconds for request
// if browser driver or grid doesn't send response
connectionRetryTimeout: 120000,
//
// Default request retries count
connectionRetryCount: 3,
//
// Test runner services
// Services take over a specific job you don't want to take care of. They enhance
// your test setup with almost no effort. Unlike plugins, they don't add new
// commands. Instead, they hook themselves up into the test process.
services: ['browserstack'],
// Framework you want to run your specs with.
// The following are supported: Mocha, Jasmine, and Cucumber
// see also: https://webdriver.io/docs/frameworks
//
// Make sure you have the wdio adapter package for the specific framework installed
// before running any tests.
framework: 'jasmine',
//
// The number of times to retry the entire specfile when it fails as a whole
// specFileRetries: 1,
//
// Delay in seconds between the spec file retry attempts
// specFileRetriesDelay: 0,
//
// Whether or not retried specfiles should be retried immediately or deferred to the end of the queue
// specFileRetriesDeferred: false,
//
// Test reporter for stdout.
// The only one supported by default is 'dot'
// see also: https://webdriver.io/docs/dot-reporter
reporters: ['spec'],
//
// Options to be passed to Jasmine.
jasmineOpts: {
// Jasmine default timeout
defaultTimeoutInterval: 60000,
//
// The Jasmine framework allows interception of each assertion in order to log the state of the application
// or website depending on the result. For example, it is pretty handy to take a screenshot every time
// an assertion fails.
expectationResultHandler: function (passed, assertion) {
// do something
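// A minimal sketch (not from the original config): take a screenshot when an assertion fails
// if (!passed) { browser.takeScreenshot() }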
},
helpers: [
'test/unit/_setup/jsdom.ts',
'test/unit/_setup/consoleReporter.ts',
],
},
//
// =====
// Hooks
// =====
// WebdriverIO provides several hooks you can use to interfere with the test process in order to enhance
// it and to build services around it. You can either apply a single function or an array of
// methods to it. If one of them returns with a promise, WebdriverIO will wait until that promise got
// resolved to continue.
/**
* Gets executed once before all workers get launched.
* @param {Object} config wdio configuration object
* @param {Array.<Object>} capabilities list of capabilities details
*/
// onPrepare: function (config, capabilities) {
// },
/**
* Gets executed before a worker process is spawned and can be used to initialise specific service
* for that worker as well as modify runtime environments in an async fashion.
* @param {String} cid capability id (e.g 0-0)
* @param {[type]} caps object containing capabilities for session that will be spawn in the worker
* @param {[type]} specs specs to be run in the worker process
* @param {[type]} args object that will be merged with the main configuration once worker is initialized
* @param {[type]} execArgv list of string arguments passed to the worker process
*/
// onWorkerStart: function (cid, caps, specs, args, execArgv) {
// },
/**
* Gets executed just after a worker process has exited.
* @param {String} cid capability id (e.g 0-0)
* @param {Number} exitCode 0 - success, 1 - fail
* @param {[type]} specs specs to be run in the worker process
* @param {Number} retries number of retries used
*/
// onWorkerEnd: function (cid, exitCode, specs, retries) {
// },
/**
* Gets executed just before initialising the webdriver session and test framework. It allows you
* to manipulate configurations depending on the capability or spec.
* @param {Object} config wdio configuration object
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Array.<String>} specs List of spec file paths that are to be run
* @param {String} cid worker id (e.g. 0-0)
*/
// beforeSession: function (config, capabilities, specs, cid) {
// },
/**
* Gets executed before test execution begins. At this point you can access to all global
* variables like `browser`. It is the perfect place to define custom commands.
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Array.<String>} specs List of spec file paths that are to be run
* @param {Object} browser instance of created browser/device session
*/
// before: function (capabilities, specs) {
// },
/**
* Runs before a WebdriverIO command gets executed.
* @param {String} commandName hook command name
* @param {Array} args arguments that command would receive
*/
// beforeCommand: function (commandName, args) {
// },
/**
* Hook that gets executed before the suite starts
* @param {Object} suite suite details
*/
// beforeSuite: function (suite) {
// },
/**
* Function to be executed before a test (in Mocha/Jasmine) starts.
*/
// beforeTest: function (test, context) {
// },
/**
* Hook that gets executed _before_ a hook within the suite starts (e.g. runs before calling
* beforeEach in Mocha)
*/
// beforeHook: function (test, context) {
// },
/**
* Hook that gets executed _after_ a hook within the suite starts (e.g. runs after calling
* afterEach in Mocha)
*/
// afterHook: function (test, context, { error, result, duration, passed, retries }) {
// },
/**
* Function to be executed after a test (in Mocha/Jasmine only)
* @param {Object} test test object
* @param {Object} context scope object the test was executed with
* @param {Error} result.error error object in case the test fails, otherwise `undefined`
* @param {Any} result.result return object of test function
* @param {Number} result.duration duration of test
* @param {Boolean} result.passed true if test has passed, otherwise false
* @param {Object} result.retries information about spec-related retries, e.g. `{ attempts: 0, limit: 0 }`
*/
// afterTest: function(test, context, { error, result, duration, passed, retries }) {
// },
/**
* Hook that gets executed after the suite has ended
* @param {Object} suite suite details
*/
// afterSuite: function (suite) {
// },
/**
* Runs after a WebdriverIO command gets executed
* @param {String} commandName hook command name
* @param {Array} args arguments that command would receive
* @param {Number} result 0 - command success, 1 - command error
* @param {Object} error error object if any
*/
// afterCommand: function (commandName, args, result, error) {
// },
/**
* Gets executed after all tests are done. You still have access to all global variables from
* the test.
* @param {Number} result 0 - test pass, 1 - test fail
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Array.<String>} specs List of spec file paths that ran
*/
// after: function (result, capabilities, specs) {
// },
/**
* Gets executed right after terminating the webdriver session.
* @param {Object} config wdio configuration object
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Array.<String>} specs List of spec file paths that ran
*/
// afterSession: function (config, capabilities, specs) {
// },
/**
* Gets executed after all workers got shut down and the process is about to exit. An error
* thrown in the onComplete hook will result in the test run failing.
* @param {Object} exitCode 0 - success, 1 - fail
* @param {Object} config wdio configuration object
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Object} results object containing test results
*/
// onComplete: function(exitCode, config, capabilities, results) {
// },
/**
* Gets executed when a refresh happens.
* @param {String} oldSessionId session ID of the old session
* @param {String} newSessionId session ID of the new session
*/
// onReload: function(oldSessionId, newSessionId) {
// }
}

Related

Swagger ExpressJS returns 'No operations defined in spec'

Hello, I'm trying to add Swagger to my Express server, but it returns 'No operations defined in spec!'. I've already watched about five different videos but still have no luck fixing this.
The configuration looks mostly correct; only some adjustments in the definition and options should make it work. Here is how the code should be re-arranged.
Write the YAML indented with two spaces instead of tabs:
/**
 * @swagger
 * /api/v1/friends:
 *   get:
 *     tags:
 *       - "Healthcheck"
 *     summary: summary of your api endpoint.
 *     description: description of your api endpoint.
 *     responses:
 *       200:
 *         description: List of topics.
 */
Pass the Swagger options with these configurations (change the values based on your application, and check that the file path in apis is specified accurately). Most values are optional; see the docs for more details.
const swaggerOptions = {
  openapi: '3.0.0',
  swaggerDefinition: {
    info: {
      title: 'Friends API',
      version: '1.0.0',
      description: 'Friends api endpoints',
      servers: ['https://your-api-serverhosturl.com'],
    },
    produces: ['application/json'],
  },
  apis: ['../modules/friends/*.routes.js'],
};
const swaggerDocs = swaggerJsDoc(swaggerOptions);
app.use('/api/api-docs', swaggerUI.serve, swaggerUI.setup(swaggerDocs));
Navigate to http://yourhost:port/api/api-docs
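For completeness, a sketch of the wiring the snippet above assumes; the package names (swagger-jsdoc and swagger-ui-express) are the commonly used ones and are an assumption, since the answer does not show its imports:
// Assumed imports for the snippet above; adjust to your project
const express = require('express');
const swaggerJsDoc = require('swagger-jsdoc');   // provides the swaggerJsDoc(options) function
const swaggerUI = require('swagger-ui-express'); // provides swaggerUI.serve and swaggerUI.setup
const app = express();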

How to stop Selenium installation in the "onPrepare" hook when using WebdriverIO (wdio v7)

When I run my tests using wdio, they fail in the "onPrepare" hook as it tries to install the Selenium server, throwing this error:
2021-06-01T16:10:07.130Z INFO @wdio/cli:launcher: Run onPrepare hook
Error in "getDownloadStream". Could not download https://selenium-release.storage.googleapis.com/3.141/selenium-server-standalone-3.141.59.jar
See more details below:
connect ETIMEDOUT 142.250.74.112:443
2021-06-01T16:11:22.829Z ERROR @wdio/cli:utils: A service failed in the 'onPrepare' hook
RequestError: connect ETIMEDOUT 142.250.74.112:443
at ClientRequest.<anonymous> (/Users/test/node_modules/got/dist/source/core/index.js:956:111)
at Object.onceWrapper (events.js:421:26)
at ClientRequest.emit (events.js:326:22)
at ClientRequest.EventEmitter.emit (domain.js:483:12)
at ClientRequest.origin.emit (/Users/test/node_modules/@szmarczak/http-timer/dist/source/index.js:39:20)
at TLSSocket.socketErrorListener (_http_client.js:427:9)
at TLSSocket.emit (events.js:314:20)
at TLSSocket.EventEmitter.emit (domain.js:483:12)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
The issue is that I don't even want wdio to try to install Selenium, because I have already installed it. On WDIO v6 this used to work: if I installed Selenium myself before running the tests, I didn't get this error.
So what I want to know is: is there a way to stop/skip the Selenium installation in the onPrepare hook?
This is my WDIO config file:
require('@babel/register');
require('@babel/polyfill');
const drivers = {
proxy: process.env.proxy,
baseURL: 'https://selenium-release.storage.googleapis.com',
version: '3.141.59',
ignoreExtraDrivers: true,
drivers: {
chrome: {
version: '88.0.4324.96',
arch: process.arch,
baseURL: 'https://chromedriver.storage.googleapis.com'
},
firefox: {
version: '0.25.0',
arch: process.arch,
baseURL: 'https://github.com/mozilla/geckodriver/releases/download'
}
}
};
exports.config = {
runner: 'local',
specs: ['./browser-tests/specs/**/*.spec.js'],
exclude: [
// 'path/to/excluded/files'
],
maxInstances: 10,
capabilities: [
{
maxInstances: 5,
//
browserName: 'firefox',
acceptInsecureCerts: true
}
],
logLevel: 'debug',
bail: 0,
baseUrl: 'http://localhost:3000',
waitforTimeout: 10000,
connectionRetryTimeout: 120000,
connectionRetryCount: 3,
skipSeleniumInstall: true,
services: [
[
'selenium-standalone',
{
args: { drivers } // drivers to use
}
]
],
framework: 'jasmine',
reporters: ['spec'],
};
I had the file wdio.conf.js created automatically when installing wdio.
exports.config = {
//
// ====================
// Runner Configuration
// ====================
//
//
// ==================
// Specify Test Files
// ==================
// Define which test specs should run. The pattern is relative to the directory
// from which `wdio` was called.
//
// The specs are defined as an array of spec files (optionally using wildcards
// that will be expanded). The test for each spec file will be run in a separate
// worker process. In order to have a group of spec files run in the same worker
// process simply enclose them in an array within the specs array.
//
// If you are calling `wdio` from an NPM script (see https://docs.npmjs.com/cli/run-script),
// then the current working directory is where your `package.json` resides, so `wdio`
// will be called from there.
//
specs: [
'./test/specs/**/*.js'
],
// Patterns to exclude.
exclude: [
// 'path/to/excluded/files'
],
//
// ============
// Capabilities
// ============
// Define your capabilities here. WebdriverIO can run multiple capabilities at the same
// time. Depending on the number of capabilities, WebdriverIO launches several test
// sessions. Within your capabilities you can overwrite the spec and exclude options in
// order to group specific specs to a specific capability.
//
// First, you can define how many instances should be started at the same time. Let's
// say you have 3 different capabilities (Chrome, Firefox, and Safari) and you have
// set maxInstances to 1; wdio will spawn 3 processes. Therefore, if you have 10 spec
// files and you set maxInstances to 10, all spec files will get tested at the same time
// and 30 processes will get spawned. The property handles how many capabilities
// from the same test should run tests.
//
maxInstances: 10,
//
// If you have trouble getting all important capabilities together, check out the
// Sauce Labs platform configurator - a great tool to configure your capabilities:
// https://saucelabs.com/platform/platform-configurator
//
capabilities: [{
// maxInstances can get overwritten per capability. So if you have an in-house Selenium
// grid with only 5 firefox instances available you can make sure that not more than
// 5 instances get started at a time.
maxInstances: 5,
//
browserName: 'chrome',
acceptInsecureCerts: true
// If outputDir is provided WebdriverIO can capture driver session logs
// it is possible to configure which logTypes to include/exclude.
// excludeDriverLogs: ['*'], // pass '*' to exclude all driver session logs
// excludeDriverLogs: ['bugreport', 'server'],
}],
//
// ===================
// Test Configurations
// ===================
// Define all options that are relevant for the WebdriverIO instance here
//
// Level of logging verbosity: trace | debug | info | warn | error | silent
logLevel: 'info',
//
// Set specific log levels per logger
// loggers:
// - webdriver, webdriverio
// - @wdio/browserstack-service, @wdio/devtools-service, @wdio/sauce-service
// - @wdio/mocha-framework, @wdio/jasmine-framework
// - @wdio/local-runner
// - @wdio/sumologic-reporter
// - @wdio/cli, @wdio/config, @wdio/utils
// Level of logging verbosity: trace | debug | info | warn | error | silent
// logLevels: {
// webdriver: 'info',
// '@wdio/appium-service': 'info'
// },
//
// If you only want to run your tests until a specific amount of tests have failed use
// bail (default is 0 - don't bail, run all tests).
bail: 0,
//
// Set a base URL in order to shorten url command calls. If your `url` parameter starts
// with `/`, the base url gets prepended, not including the path portion of your baseUrl.
// If your `url` parameter starts without a scheme or `/` (like `some/path`), the base url
// gets prepended directly.
baseUrl: 'http://localhost',
//
// Default timeout for all waitFor* commands.
waitforTimeout: 10000,
//
// Default timeout in milliseconds for request
// if browser driver or grid doesn't send response
connectionRetryTimeout: 120000,
//
// Default request retries count
connectionRetryCount: 3,
//
// Test runner services
// Services take over a specific job you don't want to take care of. They enhance
// your test setup with almost no effort. Unlike plugins, they don't add new
// commands. Instead, they hook themselves up into the test process.
services: ['selenium-standalone'],
// Framework you want to run your specs with.
// The following are supported: Mocha, Jasmine, and Cucumber
// see also: https://webdriver.io/docs/frameworks
//
// Make sure you have the wdio adapter package for the specific framework installed
// before running any tests.
framework: 'mocha',
//
// The number of times to retry the entire specfile when it fails as a whole
// specFileRetries: 1,
//
// Delay in seconds between the spec file retry attempts
// specFileRetriesDelay: 0,
//
// Whether or not retried specfiles should be retried immediately or deferred to the end of the queue
// specFileRetriesDeferred: false,
//
// Test reporter for stdout.
// The only one supported by default is 'dot'
// see also: https://webdriver.io/docs/dot-reporter
reporters: ['spec'],
//
// Options to be passed to Mocha.
// See the full list at http://mochajs.org/
mochaOpts: {
ui: 'bdd',
timeout: 60000
},
//
// =====
// Hooks
// =====
// WebdriverIO provides several hooks you can use to interfere with the test process in order to enhance
// it and to build services around it. You can either apply a single function or an array of
// methods to it. If one of them returns with a promise, WebdriverIO will wait until that promise got
// resolved to continue.
/**
* Gets executed once before all workers get launched.
* @param {Object} config wdio configuration object
* @param {Array.<Object>} capabilities list of capabilities details
*/
// onPrepare: function (config, capabilities) {
// },
/**
* Gets executed before a worker process is spawned and can be used to initialise specific service
* for that worker as well as modify runtime environments in an async fashion.
* @param {String} cid capability id (e.g 0-0)
* @param {[type]} caps object containing capabilities for session that will be spawn in the worker
* @param {[type]} specs specs to be run in the worker process
* @param {[type]} args object that will be merged with the main configuration once worker is initialized
* @param {[type]} execArgv list of string arguments passed to the worker process
*/
// onWorkerStart: function (cid, caps, specs, args, execArgv) {
// },
/**
* Gets executed just after a worker process has exited.
* @param {String} cid capability id (e.g 0-0)
* @param {Number} exitCode 0 - success, 1 - fail
* @param {[type]} specs specs to be run in the worker process
* @param {Number} retries number of retries used
*/
// onWorkerEnd: function (cid, exitCode, specs, retries) {
// },
/**
* Gets executed just before initialising the webdriver session and test framework. It allows you
* to manipulate configurations depending on the capability or spec.
* @param {Object} config wdio configuration object
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Array.<String>} specs List of spec file paths that are to be run
* @param {String} cid worker id (e.g. 0-0)
*/
// beforeSession: function (config, capabilities, specs, cid) {
// },
/**
* Gets executed before test execution begins. At this point you can access to all global
* variables like `browser`. It is the perfect place to define custom commands.
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Array.<String>} specs List of spec file paths that are to be run
* @param {Object} browser instance of created browser/device session
*/
// before: function (capabilities, specs) {
// },
/**
* Runs before a WebdriverIO command gets executed.
* @param {String} commandName hook command name
* @param {Array} args arguments that command would receive
*/
// beforeCommand: function (commandName, args) {
// },
/**
* Hook that gets executed before the suite starts
* @param {Object} suite suite details
*/
// beforeSuite: function (suite) {
// },
/**
* Function to be executed before a test (in Mocha/Jasmine) starts.
*/
// beforeTest: function (test, context) {
// },
/**
* Hook that gets executed _before_ a hook within the suite starts (e.g. runs before calling
* beforeEach in Mocha)
*/
// beforeHook: function (test, context) {
// },
/**
* Hook that gets executed _after_ a hook within the suite starts (e.g. runs after calling
* afterEach in Mocha)
*/
// afterHook: function (test, context, { error, result, duration, passed, retries }) {
// },
/**
* Function to be executed after a test (in Mocha/Jasmine only)
* @param {Object} test test object
* @param {Object} context scope object the test was executed with
* @param {Error} result.error error object in case the test fails, otherwise `undefined`
* @param {Any} result.result return object of test function
* @param {Number} result.duration duration of test
* @param {Boolean} result.passed true if test has passed, otherwise false
* @param {Object} result.retries information about spec-related retries, e.g. `{ attempts: 0, limit: 0 }`
*/
// afterTest: function(test, context, { error, result, duration, passed, retries }) {
// },
/**
* Hook that gets executed after the suite has ended
* @param {Object} suite suite details
*/
// afterSuite: function (suite) {
// },
/**
* Runs after a WebdriverIO command gets executed
* @param {String} commandName hook command name
* @param {Array} args arguments that command would receive
* @param {Number} result 0 - command success, 1 - command error
* @param {Object} error error object if any
*/
// afterCommand: function (commandName, args, result, error) {
// },
/**
* Gets executed after all tests are done. You still have access to all global variables from
* the test.
* @param {Number} result 0 - test pass, 1 - test fail
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Array.<String>} specs List of spec file paths that ran
*/
// after: function (result, capabilities, specs) {
// },
/**
* Gets executed right after terminating the webdriver session.
* @param {Object} config wdio configuration object
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Array.<String>} specs List of spec file paths that ran
*/
// afterSession: function (config, capabilities, specs) {
// },
/**
* Gets executed after all workers got shut down and the process is about to exit. An error
* thrown in the onComplete hook will result in the test run failing.
* @param {Object} exitCode 0 - success, 1 - fail
* @param {Object} config wdio configuration object
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Object} results object containing test results
*/
// onComplete: function(exitCode, config, capabilities, results) {
// },
/**
* Gets executed when a refresh happens.
* @param {String} oldSessionId session ID of the old session
* @param {String} newSessionId session ID of the new session
*/
// onReload: function(oldSessionId, newSessionId) {
// }
}
Commented out this line: "services: ['selenium-standalone']," and stopped getting the error after "INFO @wdio/cli:launcher: Run onPrepare hook".
Hope this helps somebody.
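Alternatively, if you want to keep the service but skip the download: note that skipSeleniumInstall in the question's config appears at the top level of the config, where the selenium-standalone service will not pick it up. The service takes install-related settings in its own options object; a hedged sketch, assuming your installed service version supports the skipSeleniumInstall flag:
// wdio.conf.js (sketch): pass skipSeleniumInstall inside the service options,
// not at the top level of the config
services: [
    [
        'selenium-standalone',
        {
            skipSeleniumInstall: true, // do not download the selenium server / drivers
            args: { drivers }          // reuse the drivers object defined above
        }
    ]
],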

WebDriverIO: browser.pause is not working

I have used browser.sleep in Protractor to hold the execution for a particular amount of time. In a similar way I have tried browser.pause in WebDriverIO, but it is not pausing for the given amount of time.
I even referred to the official WebDriverIO documentation for browser.pause, and the same usage is given there.
Step Definition Code:
Given(/^Verify the title of Salesforce web page$/, function () {
  browser.url('https://login.salesforce.com/');
  browser.pause(10000);
});
I use async mode in the configuration
WebDriverIO Version: ^5.22.4
wdio.config.js
exports.config = {
//
// ====================
// Runner Configuration
// ====================
//
// WebdriverIO allows it to run your tests in arbitrary locations (e.g. locally or
// on a remote machine).
runner: 'local',
//
// Override default path ('/wd/hub') for chromedriver service.
path: '/',
//
// ==================
// Specify Test Files
// ==================
// Define which test specs should run. The pattern is relative to the directory
// from which `wdio` was called. Notice that, if you are calling `wdio` from an
// NPM script (see https://docs.npmjs.com/cli/run-script) then the current working
// directory is where your package.json resides, so `wdio` will be called from there.
//
specs: [
'./test/features/*.feature'
],
// Patterns to exclude.
exclude: [
// 'path/to/excluded/files'
],
//
// ============
// Capabilities
// ============
// Define your capabilities here. WebdriverIO can run multiple capabilities at the same
// time. Depending on the number of capabilities, WebdriverIO launches several test
// sessions. Within your capabilities you can overwrite the spec and exclude options in
// order to group specific specs to a specific capability.
//
// First, you can define how many instances should be started at the same time. Let's
// say you have 3 different capabilities (Chrome, Firefox, and Safari) and you have
// set maxInstances to 1; wdio will spawn 3 processes. Therefore, if you have 10 spec
// files and you set maxInstances to 10, all spec files will get tested at the same time
// and 30 processes will get spawned. The property handles how many capabilities
// from the same test should run tests.
//
maxInstances: 10,
//
// If you have trouble getting all important capabilities together, check out the
// Sauce Labs platform configurator - a great tool to configure your capabilities:
// https://docs.saucelabs.com/reference/platforms-configurator
//
capabilities: [{
// maxInstances can get overwritten per capability. So if you have an in-house Selenium
// grid with only 5 firefox instances available you can make sure that not more than
// 5 instances get started at a time.
maxInstances: 5,
//
browserName: 'chrome',
// If outputDir is provided WebdriverIO can capture driver session logs
// it is possible to configure which logTypes to include/exclude.
// excludeDriverLogs: ['*'], // pass '*' to exclude all driver session logs
// excludeDriverLogs: ['bugreport', 'server'],
}],
//
// ===================
// Test Configurations
// ===================
// Define all options that are relevant for the WebdriverIO instance here
//
// Level of logging verbosity: trace | debug | info | warn | error | silent
logLevel: 'info',
//
// Set specific log levels per logger
// loggers:
// - webdriver, webdriverio
// - @wdio/applitools-service, @wdio/browserstack-service, @wdio/devtools-service, @wdio/sauce-service
// - @wdio/mocha-framework, @wdio/jasmine-framework
// - @wdio/local-runner, @wdio/lambda-runner
// - @wdio/sumologic-reporter
// - @wdio/cli, @wdio/config, @wdio/sync, @wdio/utils
// Level of logging verbosity: trace | debug | info | warn | error | silent
// logLevels: {
// webdriver: 'info',
// '@wdio/applitools-service': 'info'
// },
//
// If you only want to run your tests until a specific amount of tests have failed use
// bail (default is 0 - don't bail, run all tests).
bail: 0,
//
// Set a base URL in order to shorten url command calls. If your `url` parameter starts
// with `/`, the base url gets prepended, not including the path portion of your baseUrl.
// If your `url` parameter starts without a scheme or `/` (like `some/path`), the base url
// gets prepended directly.
baseUrl: 'https://login.salesforce.com/',
//
// Default timeout for all waitFor* commands.
waitforTimeout: 10000,
//
// Default timeout in milliseconds for request
// if browser driver or grid doesn't send response
connectionRetryTimeout: 90000,
//
// Default request retries count
connectionRetryCount: 0,
//
// Test runner services
// Services take over a specific job you don't want to take care of. They enhance
// your test setup with almost no effort. Unlike plugins, they don't add new
// commands. Instead, they hook themselves up into the test process.
services: ['chromedriver','firefox-profile'],
// Framework you want to run your specs with.
// The following are supported: Mocha, Jasmine, and Cucumber
// see also: https://webdriver.io/docs/frameworks.html
//
// Make sure you have the wdio adapter package for the specific framework installed
// before running any tests.
framework: 'cucumber',
//
// The number of times to retry the entire specfile when it fails as a whole
// specFileRetries: 1,
//
// Test reporter for stdout.
// The only one supported by default is 'dot'
// see also: https://webdriver.io/docs/dot-reporter.html
reporters: ['spec',['allure', {
outputDir: 'allure-results',
disableWebdriverStepsReporting: true,
disableWebdriverScreenshotsReporting: false,
useCucumberStepReporter:true
}]],
// If you are using Cucumber you need to specify the location of your step definitions.
cucumberOpts: {
require: ['./built/**/*.js'], // <string[]> (file/dir) require files before executing features
backtrace: false, // <boolean> show full backtrace for errors
requireModule: [], // <string[]> ("extension:module") require files with the given EXTENSION after requiring MODULE (repeatable)
dryRun: false, // <boolean> invoke formatters without executing steps
failFast: false, // <boolean> abort the run on first failure
format: ['pretty'], // <string[]> (type[:path]) specify the output format, optionally supply PATH to redirect formatter output (repeatable)
colors: true, // <boolean> disable colors in formatter output
snippets: true, // <boolean> hide step definition snippets for pending steps
source: true, // <boolean> hide source uris
profile: [], // <string[]> (name) specify the profile to use
strict: false, // <boolean> fail if there are any undefined or pending steps
tagExpression: '', // <string> (expression) only execute the features or scenarios with tags matching the expression
timeout: 60000, // <number> timeout for step definitions
ignoreUndefinedDefinitions: false, // <boolean> Enable this config to treat undefined definitions as warnings.
},
//
// =====
// Hooks
// =====
// WebdriverIO provides several hooks you can use to interfere with the test process in order to enhance
// it and to build services around it. You can either apply a single function or an array of
// methods to it. If one of them returns with a promise, WebdriverIO will wait until that promise got
// resolved to continue.
/**
* Gets executed once before all workers get launched.
* @param {Object} config wdio configuration object
* @param {Array.<Object>} capabilities list of capabilities details
*/
// onPrepare: function (config, capabilities) {
// },
/**
* Gets executed just before initialising the webdriver session and test framework. It allows you
* to manipulate configurations depending on the capability or spec.
* @param {Object} config wdio configuration object
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Array.<String>} specs List of spec file paths that are to be run
*/
// beforeSession: function (config, capabilities, specs) {
// },
/**
* Gets executed before test execution begins. At this point you can access to all global
* variables like `browser`. It is the perfect place to define custom commands.
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Array.<String>} specs List of spec file paths that are to be run
*/
before: function (_capabilities) {
// =================
// Assertion Library
// =================
const chai = require('chai');
global.expect = chai.expect;
global.assert = chai.assert;
global.should = chai.should();
require('ts-node').register({ files: true });
},
/**
* Runs before a WebdriverIO command gets executed.
* @param {String} commandName hook command name
* @param {Array} args arguments that command would receive
*/
// beforeCommand: function (commandName, args) {
// },
/**
* Runs before a Cucumber feature
*/
beforeFeature: function (uri, feature, scenarios) {
scenarioCounter = 0;
},
/**
* Runs before a Cucumber scenario
*/
// beforeScenario: function (uri, feature, scenario, sourceLocation) {
// },
/**
* Runs before a Cucumber step
*/
// beforeStep: function (uri, feature, stepData, context) {
// },
/**
* Runs after a Cucumber step
*/
afterStep: function (uri, feature, { error, result, duration, passed }, stepData, context) {
if (error !== undefined) {
browser.takeScreenshot();
}
},
/**
* Runs after a Cucumber scenario
*/
afterScenario: function (uri, feature, scenario, result, sourceLocation) {
scenarioCounter += 1;
addArgument('Scenario #', scenarioCounter);
},
/**
* Runs after a Cucumber feature
*/
// afterFeature: function (uri, feature, scenarios) {
// },
/**
* Runs after a WebdriverIO command gets executed
* @param {String} commandName hook command name
* @param {Array} args arguments that command would receive
* @param {Number} result 0 - command success, 1 - command error
* @param {Object} error error object if any
*/
// afterCommand: function (commandName, args, result, error) {
// },
/**
* Gets executed after all tests are done. You still have access to all global variables from
* the test.
* @param {Number} result 0 - test pass, 1 - test fail
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Array.<String>} specs List of spec file paths that ran
*/
// after: function (result, capabilities, specs) {
// },
/**
* Gets executed right after terminating the webdriver session.
* @param {Object} config wdio configuration object
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Array.<String>} specs List of spec file paths that ran
*/
// afterSession: function (config, capabilities, specs) {
// },
/**
* Gets executed after all workers got shut down and the process is about to exit. An error
* thrown in the onComplete hook will result in the test run failing.
* @param {Object} exitCode 0 - success, 1 - fail
* @param {Object} config wdio configuration object
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Object} results object containing test results
*/
// onComplete: function(exitCode, config, capabilities, results) {
// },
/**
* Gets executed when a refresh happens.
* @param {String} oldSessionId session ID of the old session
* @param {String} newSessionId session ID of the new session
*/
//onReload: function(oldSessionId, newSessionId) {
//}
}
Since it was async mode in WebDriverIO, I had to add await to both of the lines, as below.
Given(/^Verify the title of Salesforce web page$/, async function () {
  await browser.url('https://login.salesforce.com/');
  await browser.pause(10000);
});
Might be worth trying async/await as below:
Given(/^Verify the title of Salesforce web page$/, async function () {
  await browser.url('https://login.salesforce.com/');
  await browser.pause(10000);
});
I hope it helps
In my case someone had globally activated fake timers (jest.useFakeTimers()).
For now, in my end-to-end tests, I'm just switching back to real timers (jest.useRealTimers()).

CakePHP 3.7, Middleware, Authentication and Routing

I'm using CakePHP 3.7 and the authentication middleware.
My app is hosted locally at http://192.168.33.10/scoring.
I'm using the following middleware method in my Application.php.
<?php
/**
* CakePHP(tm) : Rapid Development Framework (https://cakephp.org)
* Copyright (c) Cake Software Foundation, Inc. (https://cakefoundation.org)
*
* Licensed under The MIT License
* For full copyright and license information, please see the LICENSE.txt
* Redistributions of files must retain the above copyright notice.
*
* @copyright Copyright (c) Cake Software Foundation, Inc. (https://cakefoundation.org)
* @link https://cakephp.org CakePHP(tm) Project
* @since 3.3.0
* @license https://opensource.org/licenses/mit-license.php MIT License
*/
namespace App;
use Authentication\AuthenticationService;
use Authentication\AuthenticationServiceProviderInterface;
use Authentication\Middleware\AuthenticationMiddleware;
use Cake\Core\Configure;
use Cake\Core\Exception\MissingPluginException;
use Cake\Error\Middleware\ErrorHandlerMiddleware;
use Cake\Http\BaseApplication;
use Cake\Routing\Middleware\AssetMiddleware;
use Cake\Routing\Middleware\RoutingMiddleware;
use Cake\Routing\Router;
use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;
/**
* Application setup class.
*
* This defines the bootstrapping logic and middleware layers you
* want to use in your application.
*/
class Application extends BaseApplication implements
AuthenticationServiceProviderInterface
{
/**
* {@inheritDoc}
*/
public function bootstrap()
{
$this->addPlugin('CakeDC/Enum');
$this->addPlugin('Muffin/Trash');
$this->addPlugin('AuditStash');
// Call parent to load bootstrap from files.
parent::bootstrap();
// include required plugins
$this->addPlugin('Authentication');
if (PHP_SAPI === 'cli') {
try {
$this->addPlugin('Bake');
} catch (MissingPluginException $e) {
// Do not halt if the plugin is missing
}
$this->addPlugin('Migrations');
}
/*
* Only try to load DebugKit in development mode
* Debug Kit should not be installed on a production system
*/
if (Configure::read('debug')) {
$this->addPlugin(\DebugKit\Plugin::class);
}
}
/**
* Returns a service provider instance.
*
* @param \Psr\Http\Message\ServerRequestInterface $request Request
* @param \Psr\Http\Message\ResponseInterface $response Response
* @return \Authentication\AuthenticationServiceInterface
*/
public function getAuthenticationService(ServerRequestInterface $request, ResponseInterface $response)
{
$service = new AuthenticationService();
$fields = [
'username' => 'email',
'password' => 'password'
];
// Load identifiers
//$service->loadIdentifier('Authentication.Password', compact('fields'));
$service->loadIdentifier('Development', compact('fields'));
// Load the authenticators, you want session first
$service->loadAuthenticator('Authentication.Session');
$service->loadAuthenticator('Authentication.Form', [
'fields' => $fields
]);
return $service;
}
/**
* Setup the middleware queue your application will use.
*
* @param \Cake\Http\MiddlewareQueue $middlewareQueue The middleware queue to setup.
* @return \Cake\Http\MiddlewareQueue The updated middleware queue.
*/
public function middleware($middlewareQueue)
{
// Add the authentication middleware
$authentication = new AuthenticationMiddleware($this, [
'unauthenticatedRedirect' => Router::url(['controller' => 'Users', 'action' => 'login']),
]);
$middlewareQueue
// Catch any exceptions in the lower layers,
// and make an error page/response
->add(new ErrorHandlerMiddleware(null, Configure::read('Error')))
// Handle plugin/theme assets like CakePHP normally does.
->add(new AssetMiddleware([
'cacheTime' => Configure::read('Asset.cacheTime')
]))
// Add routing middleware.
// Routes collection cache enabled by default, to disable route caching
// pass null as cacheConfig, example: `new RoutingMiddleware($this)`
// you might want to disable this cache in case your routing is extremely simple
->add(new RoutingMiddleware($this, '_cake_routes_'))
// Add the authentication middleware to the middleware queue
->add($authentication);
return $middlewareQueue;
}
}
I have the following in config/routes.php:
<?php
/**
* Routes configuration
*
* In this file, you set up routes to your controllers and their actions.
* Routes are very important mechanism that allows you to freely connect
* different URLs to chosen controllers and their actions (functions).
*
* CakePHP(tm) : Rapid Development Framework (https://cakephp.org)
* Copyright (c) Cake Software Foundation, Inc. (https://cakefoundation.org)
*
* Licensed under The MIT License
* For full copyright and license information, please see the LICENSE.txt
* Redistributions of files must retain the above copyright notice.
*
* @copyright Copyright (c) Cake Software Foundation, Inc. (https://cakefoundation.org)
* @link https://cakephp.org CakePHP(tm) Project
* @license https://opensource.org/licenses/mit-license.php MIT License
*/
use Cake\Http\Middleware\CsrfProtectionMiddleware;
use Cake\Routing\RouteBuilder;
use Cake\Routing\Router;
use Cake\Routing\Route\DashedRoute;
/**
* The default class to use for all routes
*
* The following route classes are supplied with CakePHP and are appropriate
* to set as the default:
*
* - Route
* - InflectedRoute
* - DashedRoute
*
* If no call is made to `Router::defaultRouteClass()`, the class used is
* `Route` (`Cake\Routing\Route\Route`)
*
* Note that `Route` does not do any inflections on URLs which will result in
* inconsistently cased URLs when used with `:plugin`, `:controller` and
* `:action` markers.
*
* Cache: Routes are cached to improve performance, check the RoutingMiddleware
* constructor in your `src/Application.php` file to change this behavior.
*
*/
Router::defaultRouteClass(DashedRoute::class);
Router::scope('/', function (RouteBuilder $routes) {
// Register scoped middleware for in scopes.
$routes->registerMiddleware('csrf', new CsrfProtectionMiddleware([
'httpOnly' => true
]));
/**
* Apply a middleware to the current route scope.
* Requires middleware to be registered via `Application::routes()` with `registerMiddleware()`
*/
$routes->applyMiddleware('csrf');
/**
* Here, we are connecting '/' (base path) to a controller called 'Pages',
* its action called 'display', and we pass a param to select the view file
* to use (in this case, src/Template/Pages/home.ctp)...
*/
$routes->connect('/', ['controller' => 'Pages', 'action' => 'display', 'home']);
//connect login route
$routes->connect('/login', ['controller' => 'Users', 'action' => 'login']);
//connect logout route
$routes->connect('/logout', ['controller' => 'Users', 'action' => 'logout']);
/**
* ...and connect the rest of 'Pages' controller's URLs.
*/
$routes->connect('/pages/*', ['controller' => 'Pages', 'action' => 'display']);
/**
* Connect catchall routes for all controllers.
*
* Using the argument `DashedRoute`, the `fallbacks` method is a shortcut for
*
* ```
* $routes->connect('/:controller', ['action' => 'index'], ['routeClass' => 'DashedRoute']);
* $routes->connect('/:controller/:action/*', [], ['routeClass' => 'DashedRoute']);
* ```
*
* Any route class can be used with this method, such as:
* - DashedRoute
* - InflectedRoute
* - Route
* - Or your own route class
*
* You can remove these routes once you've connected the
* routes you want in your application.
*/
$routes->fallbacks(DashedRoute::class);
});
/**
* If you need a different set of middleware or none at all,
* open new scope and define routes there.
*
* ```
* Router::scope('/api', function (RouteBuilder $routes) {
* // No $routes->applyMiddleware() here.
* // Connect API actions here.
* });
* ```
*/
Router::prefix('admin', function ($routes) {
// All routes here will be prefixed with `/admin`
// And have the prefix => admin route element added.
$routes->fallbacks(DashedRoute::class);
});
The issue I'm having is that the redirect goes to http://192.168.33.10/login rather than going to http://192.168.33.10/scoring/login.
In troubleshooting my issue, I've discovered that the Router::url method will return /login if run in Application.php, but will return /scoring/login if run from AppController.php.
Obviously there's something I'm not seeing that's getting crossed up between the routing middleware and the authentication middleware. I'm fairly new to the latest version of CakePHP and the middleware integration, so I'm sure I've made an error somewhere.
Can someone help identify my error?

JSONStore is not created on Android with the latest fix pack 7.1.0.00.20160919-1656

The sample code below is not working with the latest fix pack on Android, but if we remove the password field from the options then it works fine. We get the error below on Android, but it works fine on iOS:
{"src":"initCollection","err":-3,"msg":"INVALID_KEY_ON_PROVISION","col":"people","usr":"test","doc":{},"res":{}}
function wlCommonInit(){
/*
* Use of WL.Client.connect() API before any connectivity to a MobileFirst Server is required.
* This API should be called only once, before any other WL.Client methods that communicate with the MobileFirst Server.
* Don't forget to specify and implement onSuccess and onFailure callback functions for WL.Client.connect(), e.g:
*
* WL.Client.connect({
* onSuccess: onConnectSuccess,
* onFailure: onConnectFailure
* });
*
*/
// Common initialization code goes here
}
function onClick(){
alert("Click");
var collectionName = 'people';
// Object that defines all the collections.
var collections = {
// Object that defines the 'people' collection.
people : {
// Object that defines the Search Fields for the 'people' collection.
searchFields : {name: 'string', age: 'integer'}
}
};
// Optional options object.
var options = {
username:"test",
// Optional password, default no password.
password : '123',
};
WL.JSONStore.init(collections, options)
.then(function () {
alert("Success in jstore");
})
.fail(function (errorObject) {
// Handle failure for any of the previous JSONStore operations (init, add).
alert("Failure in jstore : "+ JSON.stringify(errorObject));
});
};
Update: The iFix is now released. Build number is 7.1.0.0-IF201610060540 .
This is a known issue with the latest available iFix. It has been recently fixed and should be available soon.
Keep an eye out for a newer iFix release in the IBM Fix Central website for a fix for this issue.