Kube-monkey termination interval (Chaos Testing)

I'm implementing kube-monkey in my dev Kubernetes cluster and can see that pods are getting terminated every 30 seconds.
Could someone please help me set the pod termination interval in kube-monkey (Chaos Monkey for Kubernetes clusters) to some other value?
I tried setting an interval parameter in the kube-monkey YAML file (as below) so that pods are terminated every 5 minutes, but it doesn't work.
config:
  dryRun: false
  whitelistedNamespaces:
    - "default"
  debug:
    enabled: true
    interval: 5m0s
    schedule_immediate_kill: true
I couldn't find any resources online about setting the termination interval either.
Could someone please guide me on how to set this?
Thanks a lot!

If you look at the official Helm chart values.yaml, there is no interval key. Since you have set schedule_immediate_kill: true, the default kill time will be 30 seconds:
https://github.com/asobti/kube-monkey/blob/master/helm/kubemonkey/values.yaml
You can change the interval using schedule_delay:
[debug]
enabled = true
schedule_delay = 30
force_should_kill = true
schedule_immediate_kill = true
Instead of interval, use schedule_delay, as in the sketch below.
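For reference, here is a fuller sketch of kube-monkey's TOML config with schedule_delay raised for a roughly 5-minute delay. The key names come from param.go further down; the values (including the 300 seconds) are illustrative assumptions, and if you deploy via the Helm chart you should verify that its ConfigMap template actually passes debug.schedule_delay through.
# sketch of kube-monkey's config.toml (key names taken from param.go below)
[kubemonkey]
dry_run = false
run_hour = 8
start_hour = 10
end_hour = 16
graceperiod_sec = 5
whitelisted_namespaces = ["default"]

[debug]
enabled = true
# delay in seconds before scheduling runs; 300 is an assumption to match the 5-minute goal
schedule_delay = 300
force_should_kill = true
schedule_immediate_kill = true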
However, an interval setting does work with Kubethanos: https://jaxenter.com/kubernetes-chaos-kubethanos-164798.html
There you can pass it as a key-value parameter.

The Go params file shows all the configuration options we can alter; have a look: https://raw.githubusercontent.com/asobti/kube-monkey/master/config/param/param.go
package param
const (
// DryRun logs but does not terminate pods
// Type: bool
// Default: true
DryRun = "kubemonkey.dry_run"
// Timezone specifies the timezone to use when
// scheduling Pod terminations
// Type: string
// Default: America/Los_Angeles
Timezone = "kubemonkey.time_zone"
// RunHour specifies the hour of the weekday
// when the scheduler should run to schedule terminations
// Must be less than StartHour, and [0,23]
// Type: int
// Default: 8
RunHour = "kubemonkey.run_hour"
// StartHour specifies the hour beginning at
// which pod terminations may occur
// Should be set to a time when service owners are expected
// to be available
// Must be less than EndHour, and [0, 23]
// Type: int
// Default: 10
StartHour = "kubemonkey.start_hour"
// EndHour specifies the end hour beyond which no pod
// terminations will occur
// Should be set to a time when service owners are
// expected to be available
// Must be [0,23]
// Type: int
// Default: 16
EndHour = "kubemonkey.end_hour"
// GracePeriodSec specifies the amount of time in
// seconds a pod is given to shut down gracefully,
// before Kubernetes does a hard kill
// Type: int
// Default: 5
GracePeriodSec = "kubemonkey.graceperiod_sec"
// WhitelistedNamespaces specifies a list of
// namespaces where terminations are valid
// Default is defined by metav1.NamespaceDefault
// To allow all namespaces use [""]
// Type: list
// Default: [ "default" ]
WhitelistedNamespaces = "kubemonkey.whitelisted_namespaces"
// BlacklistedNamespaces specifies a list of namespaces
// for which terminations should never
// be carried out.
// Default is defined by metav1.NamespaceSystem
// To block no namespaces use [""]
// Type: list
// Default: [ "kube-system" ]
BlacklistedNamespaces = "kubemonkey.blacklisted_namespaces"
// ClusterAPIServerHost specifies the host URL for Kubernetes
// cluster APIServer. Use this config if the apiserver IP
// address provided by in-cluster config
// does not work for you because your certificate does not
// contain the right SAN
// Type: string
// Default: No default. If not specified, URL provided
// by in-cluster config is used
ClusterAPIServerHost = "kubernetes.host"
// DebugEnabled enables debug mode
// Type: bool
// Default: false
DebugEnabled = "debug.enabled"
// DebugScheduleDelay delays duration
// in sec after kube-monkey is launched
// after which scheduling is run
// Use when debugging to run scheduling sooner
// Type: int
// Default: 30
DebugScheduleDelay = "debug.schedule_delay"
// DebugForceShouldKill guarantees terminations
// to be scheduled for all eligible Deployments,
// i.e., probability of kill = 1
// Type: bool
// Default: false
DebugForceShouldKill = "debug.force_should_kill"
// DebugScheduleImmediateKill schedules pod terminations
// sometime in the next 60 sec to facilitate
// debugging (instead of the hours specified by
// StartHour and EndHour)
// Type: bool
// Default: false
DebugScheduleImmediateKill = "debug.schedule_immediate_kill"
// NotificationsEnabled enables reporting of attacks to an HTTP endpoint
// Type: bool
// Default: false
NotificationsEnabled = "notifications.enabled"
// NotificationsReportSchedule enables reporting of attack schedule to an HTTP endpoint
// Type: bool
// Default: false
NotificationsReportSchedule = "notifications.reportSchedule"
// NotificationsAttacks reports attacks to an HTTP endpoint
// Type: config.Receiver struct
// Default: Receiver{}
NotificationsAttacks = "notifications.attacks"
)

Related

In webdriver-io, how to separate specs from source

I'm using webdriver-io + browserstack (TS/JS project), and I'd like to run my tests against the bundled production output.
Setting specs to the bundled output:
specs: ['dist/test/index.js'],
If I create two separate js files, one for the source and one for the tests, the test is skipped:
INFO #wdio/cli: [0-0] SKIPPED in chrome - /dist/test/index.js
If I bundle everything together, it works.
Poring over the documentation, I can't seem to find the correct way to include source files.
When I use the Jasmine browser runner, I can separate the two via srcFiles:
{
srcDir: 'dist',
srcFiles: ['index.umd.js'],
specDir: 'dist/test',
specFiles: ['index.js'],
browser: browser,
}
My full wdio.conf.ts:
import type { Options } from '@wdio/types'
import { config as dotEnvConfig } from 'dotenv'
dotEnvConfig()
if (!process.env.BROWSERSTACK_USERNAME || !process.env.BROWSERSTACK_ACCESS_KEY)
throw new Error(
'missing .env BROWSERSTACK_USERNAME or BROWSERSTACK_ACCESS_KEY'
)
export const config: Options.Testrunner = {
//
// ====================
// Runner Configuration
// ====================
//
//
// =====================
// ts-node Configurations
// =====================
//
// You can write tests using TypeScript to get autocompletion and type safety.
// You will need typescript and ts-node installed as devDependencies.
// WebdriverIO will automatically detect if these dependencies are installed
// and will compile your config and tests for you.
// If you need to configure how ts-node runs please use the
// environment variables for ts-node or use wdio config's autoCompileOpts section.
//
user: process.env.BROWSERSTACK_USERNAME,
key: process.env.BROWSERSTACK_ACCESS_KEY,
autoCompileOpts: {
autoCompile: true,
// see https://github.com/TypeStrong/ts-node#cli-and-programmatic-options
// for all available options
tsNodeOpts: {
transpileOnly: true,
project: 'tsconfig.json',
require: ['tsconfig-paths/register'],
},
// tsconfig-paths is only used if "tsConfigPathsOpts" are provided, if you
// do please make sure "tsconfig-paths" is installed as dependency
// tsConfigPathsOpts: {
// baseUrl: './',
// paths: {
// },
// },
},
//
// ==================
// Specify Test Files
// ==================
// Define which test specs should run. The pattern is relative to the directory
// from which `wdio` was called.
//
// The specs are defined as an array of spec files (optionally using wildcards
// that will be expanded). The test for each spec file will be run in a separate
// worker process. In order to have a group of spec files run in the same worker
// process simply enclose them in an array within the specs array.
//
// If you are calling `wdio` from an NPM script (see https://docs.npmjs.com/cli/run-script),
// then the current working directory is where your `package.json` resides, so `wdio`
// will be called from there.
//
/**
* Running against the bundled specs is an order of magnitude faster than running against
* all ts source files.
*/
specs: ['dist/test/index.js'],
// Patterns to exclude.
exclude: [
// 'path/to/excluded/files'
],
//
// ============
// Capabilities
// ============
// Define your capabilities here. WebdriverIO can run multiple capabilities at the same
// time. Depending on the number of capabilities, WebdriverIO launches several test
// sessions. Within your capabilities you can overwrite the spec and exclude options in
// order to group specific specs to a specific capability.
//
// First, you can define how many instances should be started at the same time. Let's
// say you have 3 different capabilities (Chrome, Firefox, and Safari) and you have
// set maxInstances to 1; wdio will spawn 3 processes. Therefore, if you have 10 spec
// files and you set maxInstances to 10, all spec files will get tested at the same time
// and 30 processes will get spawned. The property handles how many capabilities
// from the same test should run tests.
//
maxInstances: 10,
//
// If you have trouble getting all important capabilities together, check out the
// Sauce Labs platform configurator - a great tool to configure your capabilities:
// https://saucelabs.com/platform/platform-configurator
//
capabilities: [
{
// maxInstances can get overwritten per capability. So if you have an in-house Selenium
// grid with only 5 firefox instances available you can make sure that not more than
// 5 instances get started at a time.
maxInstances: 5,
//
browserName: 'chrome',
acceptInsecureCerts: true,
// If outputDir is provided WebdriverIO can capture driver session logs
// it is possible to configure which logTypes to include/exclude.
// excludeDriverLogs: ['*'], // pass '*' to exclude all driver session logs
// excludeDriverLogs: ['bugreport', 'server'],
},
],
//
// ===================
// Test Configurations
// ===================
// Define all options that are relevant for the WebdriverIO instance here
//
// Level of logging verbosity: trace | debug | info | warn | error | silent
logLevel: 'info',
//
// Set specific log levels per logger
// loggers:
// - webdriver, webdriverio
// - @wdio/browserstack-service, @wdio/devtools-service, @wdio/sauce-service
// - @wdio/mocha-framework, @wdio/jasmine-framework
// - @wdio/local-runner
// - @wdio/sumologic-reporter
// - @wdio/cli, @wdio/config, @wdio/utils
// Level of logging verbosity: trace | debug | info | warn | error | silent
// logLevels: {
// webdriver: 'trace',
// webdriverio: 'trace',
// '@wdio/local-runner': 'trace',
// '@wdio/cli': 'trace',
// '@wdio/browserstack-service': 'debug',
// },
//
// If you only want to run your tests until a specific amount of tests have failed use
// bail (default is 0 - don't bail, run all tests).
bail: 0,
//
// Set a base URL in order to shorten url command calls. If your `url` parameter starts
// with `/`, the base url gets prepended, not including the path portion of your baseUrl.
// If your `url` parameter starts without a scheme or `/` (like `some/path`), the base url
// gets prepended directly.
baseUrl: 'http://localhost',
//
// Default timeout for all waitFor* commands.
waitforTimeout: 10000,
//
// Default timeout in milliseconds for request
// if browser driver or grid doesn't send response
connectionRetryTimeout: 120000,
//
// Default request retries count
connectionRetryCount: 3,
//
// Test runner services
// Services take over a specific job you don't want to take care of. They enhance
// your test setup with almost no effort. Unlike plugins, they don't add new
// commands. Instead, they hook themselves up into the test process.
services: ['browserstack'],
// Framework you want to run your specs with.
// The following are supported: Mocha, Jasmine, and Cucumber
// see also: https://webdriver.io/docs/frameworks
//
// Make sure you have the wdio adapter package for the specific framework installed
// before running any tests.
framework: 'jasmine',
//
// The number of times to retry the entire specfile when it fails as a whole
// specFileRetries: 1,
//
// Delay in seconds between the spec file retry attempts
// specFileRetriesDelay: 0,
//
// Whether or not retried specfiles should be retried immediately or deferred to the end of the queue
// specFileRetriesDeferred: false,
//
// Test reporter for stdout.
// The only one supported by default is 'dot'
// see also: https://webdriver.io/docs/dot-reporter
reporters: ['spec'],
//
// Options to be passed to Jasmine.
jasmineOpts: {
// Jasmine default timeout
defaultTimeoutInterval: 60000,
//
// The Jasmine framework allows interception of each assertion in order to log the state of the application
// or website depending on the result. For example, it is pretty handy to take a screenshot every time
// an assertion fails.
expectationResultHandler: function (passed, assertion) {
// do something
},
helpers: [
'test/unit/_setup/jsdom.ts',
'test/unit/_setup/consoleReporter.ts',
],
},
//
// =====
// Hooks
// =====
// WebdriverIO provides several hooks you can use to interfere with the test process in order to enhance
// it and to build services around it. You can either apply a single function or an array of
// methods to it. If one of them returns with a promise, WebdriverIO will wait until that promise got
// resolved to continue.
/**
* Gets executed once before all workers get launched.
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
*/
// onPrepare: function (config, capabilities) {
// },
/**
* Gets executed before a worker process is spawned and can be used to initialise specific service
* for that worker as well as modify runtime environments in an async fashion.
* #param {String} cid capability id (e.g 0-0)
* #param {[type]} caps object containing capabilities for session that will be spawn in the worker
* #param {[type]} specs specs to be run in the worker process
* #param {[type]} args object that will be merged with the main configuration once worker is initialized
* #param {[type]} execArgv list of string arguments passed to the worker process
*/
// onWorkerStart: function (cid, caps, specs, args, execArgv) {
// },
/**
* Gets executed just after a worker process has exited.
* #param {String} cid capability id (e.g 0-0)
* #param {Number} exitCode 0 - success, 1 - fail
* #param {[type]} specs specs to be run in the worker process
* #param {Number} retries number of retries used
*/
// onWorkerEnd: function (cid, exitCode, specs, retries) {
// },
/**
* Gets executed just before initialising the webdriver session and test framework. It allows you
* to manipulate configurations depending on the capability or spec.
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that are to be run
* #param {String} cid worker id (e.g. 0-0)
*/
// beforeSession: function (config, capabilities, specs, cid) {
// },
/**
* Gets executed before test execution begins. At this point you can access to all global
* variables like `browser`. It is the perfect place to define custom commands.
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that are to be run
* #param {Object} browser instance of created browser/device session
*/
// before: function (capabilities, specs) {
// },
/**
* Runs before a WebdriverIO command gets executed.
* #param {String} commandName hook command name
* #param {Array} args arguments that command would receive
*/
// beforeCommand: function (commandName, args) {
// },
/**
* Hook that gets executed before the suite starts
* #param {Object} suite suite details
*/
// beforeSuite: function (suite) {
// },
/**
* Function to be executed before a test (in Mocha/Jasmine) starts.
*/
// beforeTest: function (test, context) {
// },
/**
* Hook that gets executed _before_ a hook within the suite starts (e.g. runs before calling
* beforeEach in Mocha)
*/
// beforeHook: function (test, context) {
// },
/**
* Hook that gets executed _after_ a hook within the suite starts (e.g. runs after calling
* afterEach in Mocha)
*/
// afterHook: function (test, context, { error, result, duration, passed, retries }) {
// },
/**
* Function to be executed after a test (in Mocha/Jasmine only)
* #param {Object} test test object
* #param {Object} context scope object the test was executed with
* #param {Error} result.error error object in case the test fails, otherwise `undefined`
* #param {Any} result.result return object of test function
* #param {Number} result.duration duration of test
* #param {Boolean} result.passed true if test has passed, otherwise false
* #param {Object} result.retries informations to spec related retries, e.g. `{ attempts: 0, limit: 0 }`
*/
// afterTest: function(test, context, { error, result, duration, passed, retries }) {
// },
/**
* Hook that gets executed after the suite has ended
* #param {Object} suite suite details
*/
// afterSuite: function (suite) {
// },
/**
* Runs after a WebdriverIO command gets executed
* #param {String} commandName hook command name
* #param {Array} args arguments that command would receive
* #param {Number} result 0 - command success, 1 - command error
* #param {Object} error error object if any
*/
// afterCommand: function (commandName, args, result, error) {
// },
/**
* Gets executed after all tests are done. You still have access to all global variables from
* the test.
* #param {Number} result 0 - test pass, 1 - test fail
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that ran
*/
// after: function (result, capabilities, specs) {
// },
/**
* Gets executed right after terminating the webdriver session.
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that ran
*/
// afterSession: function (config, capabilities, specs) {
// },
/**
* Gets executed after all workers got shut down and the process is about to exit. An error
* thrown in the onComplete hook will result in the test run failing.
* #param {Object} exitCode 0 - success, 1 - fail
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
* #param {<Object>} results object containing test results
*/
// onComplete: function(exitCode, config, capabilities, results) {
// },
/**
* Gets executed when a refresh happens.
* #param {String} oldSessionId session ID of the old session
* #param {String} newSessionId session ID of the new session
*/
// onReload: function(oldSessionId, newSessionId) {
// }
}

How to stop Selenium installation in the "onPrepare" hook when using WebdriverIO (wdio v7)

When I run my test using wdio, it fails in the "onPrepare" hook as it tries to install the Selenium server, throwing this error:
2021-06-01T16:10:07.130Z INFO @wdio/cli:launcher: Run onPrepare hook
Error in "getDownloadStream". Could not download https://selenium-release.storage.googleapis.com/3.141/selenium-server-standalone-3.141.59.jar
See more details below:
connect ETIMEDOUT 142.250.74.112:443
2021-06-01T16:11:22.829Z ERROR @wdio/cli:utils: A service failed in the 'onPrepare' hook
RequestError: connect ETIMEDOUT 142.250.74.112:443
at ClientRequest.<anonymous> (/Users/test/node_modules/got/dist/source/core/index.js:956:111)
at Object.onceWrapper (events.js:421:26)
at ClientRequest.emit (events.js:326:22)
at ClientRequest.EventEmitter.emit (domain.js:483:12)
at ClientRequest.origin.emit (/Users/test/node_modules/@szmarczak/http-timer/dist/source/index.js:39:20)
at TLSSocket.socketErrorListener (_http_client.js:427:9)
at TLSSocket.emit (events.js:314:20)
at TLSSocket.EventEmitter.emit (domain.js:483:12)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
The issue is I don't even want wdio to try to install Selenium, because I have already installed it. On WDIO v6 this used to work: if I installed Selenium myself before running the test, I didn't get this error.
So what I want to know is: is there a way to stop/skip the Selenium installation in the onPrepare hook?
This is my WDIO config file:
require('@babel/register');
require('@babel/polyfill');
const drivers = {
proxy: process.env.proxy,
baseURL: 'https://selenium-release.storage.googleapis.com',
version: '3.141.59',
ignoreExtraDrivers: true,
drivers: {
chrome: {
version: '88.0.4324.96',
arch: process.arch,
baseURL: 'https://chromedriver.storage.googleapis.com'
},
firefox: {
version: '0.25.0',
arch: process.arch,
baseURL: 'https://github.com/mozilla/geckodriver/releases/download'
}
}
};
exports.config = {
runner: 'local',
specs: ['./browser-tests/specs/**/*.spec.js'],
exclude: [
// 'path/to/excluded/files'
],
maxInstances: 10,
capabilities: [
{
maxInstances: 5,
//
browserName: 'firefox',
acceptInsecureCerts: true
}
],
logLevel: 'debug',
bail: 0,
baseUrl: 'http://localhost:3000',
waitforTimeout: 10000,
connectionRetryTimeout: 120000,
connectionRetryCount: 3,
skipSeleniumInstall: true,
services: [
[
'selenium-standalone',
{
args: { drivers } // drivers to use
}
]
],
framework: 'jasmine',
reporters: ['spec'],
};
I had the file wdio.conf.js created automatically when installing wdio:
exports.config = {
//
// ====================
// Runner Configuration
// ====================
//
//
// ==================
// Specify Test Files
// ==================
// Define which test specs should run. The pattern is relative to the directory
// from which `wdio` was called.
//
// The specs are defined as an array of spec files (optionally using wildcards
// that will be expanded). The test for each spec file will be run in a separate
// worker process. In order to have a group of spec files run in the same worker
// process simply enclose them in an array within the specs array.
//
// If you are calling `wdio` from an NPM script (see https://docs.npmjs.com/cli/run-script),
// then the current working directory is where your `package.json` resides, so `wdio`
// will be called from there.
//
specs: [
'./test/specs/**/*.js'
],
// Patterns to exclude.
exclude: [
// 'path/to/excluded/files'
],
//
// ============
// Capabilities
// ============
// Define your capabilities here. WebdriverIO can run multiple capabilities at the same
// time. Depending on the number of capabilities, WebdriverIO launches several test
// sessions. Within your capabilities you can overwrite the spec and exclude options in
// order to group specific specs to a specific capability.
//
// First, you can define how many instances should be started at the same time. Let's
// say you have 3 different capabilities (Chrome, Firefox, and Safari) and you have
// set maxInstances to 1; wdio will spawn 3 processes. Therefore, if you have 10 spec
// files and you set maxInstances to 10, all spec files will get tested at the same time
// and 30 processes will get spawned. The property handles how many capabilities
// from the same test should run tests.
//
maxInstances: 10,
//
// If you have trouble getting all important capabilities together, check out the
// Sauce Labs platform configurator - a great tool to configure your capabilities:
// https://saucelabs.com/platform/platform-configurator
//
capabilities: [{
// maxInstances can get overwritten per capability. So if you have an in-house Selenium
// grid with only 5 firefox instances available you can make sure that not more than
// 5 instances get started at a time.
maxInstances: 5,
//
browserName: 'chrome',
acceptInsecureCerts: true
// If outputDir is provided WebdriverIO can capture driver session logs
// it is possible to configure which logTypes to include/exclude.
// excludeDriverLogs: ['*'], // pass '*' to exclude all driver session logs
// excludeDriverLogs: ['bugreport', 'server'],
}],
//
// ===================
// Test Configurations
// ===================
// Define all options that are relevant for the WebdriverIO instance here
//
// Level of logging verbosity: trace | debug | info | warn | error | silent
logLevel: 'info',
//
// Set specific log levels per logger
// loggers:
// - webdriver, webdriverio
// - @wdio/browserstack-service, @wdio/devtools-service, @wdio/sauce-service
// - @wdio/mocha-framework, @wdio/jasmine-framework
// - @wdio/local-runner
// - @wdio/sumologic-reporter
// - @wdio/cli, @wdio/config, @wdio/utils
// Level of logging verbosity: trace | debug | info | warn | error | silent
// logLevels: {
// webdriver: 'info',
// '@wdio/appium-service': 'info'
// },
//
// If you only want to run your tests until a specific amount of tests have failed use
// bail (default is 0 - don't bail, run all tests).
bail: 0,
//
// Set a base URL in order to shorten url command calls. If your `url` parameter starts
// with `/`, the base url gets prepended, not including the path portion of your baseUrl.
// If your `url` parameter starts without a scheme or `/` (like `some/path`), the base url
// gets prepended directly.
baseUrl: 'http://localhost',
//
// Default timeout for all waitFor* commands.
waitforTimeout: 10000,
//
// Default timeout in milliseconds for request
// if browser driver or grid doesn't send response
connectionRetryTimeout: 120000,
//
// Default request retries count
connectionRetryCount: 3,
//
// Test runner services
// Services take over a specific job you don't want to take care of. They enhance
// your test setup with almost no effort. Unlike plugins, they don't add new
// commands. Instead, they hook themselves up into the test process.
services: ['selenium-standalone'],
// Framework you want to run your specs with.
// The following are supported: Mocha, Jasmine, and Cucumber
// see also: https://webdriver.io/docs/frameworks
//
// Make sure you have the wdio adapter package for the specific framework installed
// before running any tests.
framework: 'mocha',
//
// The number of times to retry the entire specfile when it fails as a whole
// specFileRetries: 1,
//
// Delay in seconds between the spec file retry attempts
// specFileRetriesDelay: 0,
//
// Whether or not retried specfiles should be retried immediately or deferred to the end of the queue
// specFileRetriesDeferred: false,
//
// Test reporter for stdout.
// The only one supported by default is 'dot'
// see also: https://webdriver.io/docs/dot-reporter
reporters: ['spec'],
//
// Options to be passed to Mocha.
// See the full list at http://mochajs.org/
mochaOpts: {
ui: 'bdd',
timeout: 60000
},
//
// =====
// Hooks
// =====
// WebdriverIO provides several hooks you can use to interfere with the test process in order to enhance
// it and to build services around it. You can either apply a single function or an array of
// methods to it. If one of them returns with a promise, WebdriverIO will wait until that promise got
// resolved to continue.
/**
* Gets executed once before all workers get launched.
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
*/
// onPrepare: function (config, capabilities) {
// },
/**
* Gets executed before a worker process is spawned and can be used to initialise specific service
* for that worker as well as modify runtime environments in an async fashion.
* #param {String} cid capability id (e.g 0-0)
* #param {[type]} caps object containing capabilities for session that will be spawn in the worker
* #param {[type]} specs specs to be run in the worker process
* #param {[type]} args object that will be merged with the main configuration once worker is initialized
* #param {[type]} execArgv list of string arguments passed to the worker process
*/
// onWorkerStart: function (cid, caps, specs, args, execArgv) {
// },
/**
* Gets executed just after a worker process has exited.
* #param {String} cid capability id (e.g 0-0)
* #param {Number} exitCode 0 - success, 1 - fail
* #param {[type]} specs specs to be run in the worker process
* #param {Number} retries number of retries used
*/
// onWorkerEnd: function (cid, exitCode, specs, retries) {
// },
/**
* Gets executed just before initialising the webdriver session and test framework. It allows you
* to manipulate configurations depending on the capability or spec.
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that are to be run
* #param {String} cid worker id (e.g. 0-0)
*/
// beforeSession: function (config, capabilities, specs, cid) {
// },
/**
* Gets executed before test execution begins. At this point you can access to all global
* variables like `browser`. It is the perfect place to define custom commands.
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that are to be run
* #param {Object} browser instance of created browser/device session
*/
// before: function (capabilities, specs) {
// },
/**
* Runs before a WebdriverIO command gets executed.
* #param {String} commandName hook command name
* #param {Array} args arguments that command would receive
*/
// beforeCommand: function (commandName, args) {
// },
/**
* Hook that gets executed before the suite starts
* #param {Object} suite suite details
*/
// beforeSuite: function (suite) {
// },
/**
* Function to be executed before a test (in Mocha/Jasmine) starts.
*/
// beforeTest: function (test, context) {
// },
/**
* Hook that gets executed _before_ a hook within the suite starts (e.g. runs before calling
* beforeEach in Mocha)
*/
// beforeHook: function (test, context) {
// },
/**
* Hook that gets executed _after_ a hook within the suite starts (e.g. runs after calling
* afterEach in Mocha)
*/
// afterHook: function (test, context, { error, result, duration, passed, retries }) {
// },
/**
* Function to be executed after a test (in Mocha/Jasmine only)
* #param {Object} test test object
* #param {Object} context scope object the test was executed with
* #param {Error} result.error error object in case the test fails, otherwise `undefined`
* #param {Any} result.result return object of test function
* #param {Number} result.duration duration of test
* #param {Boolean} result.passed true if test has passed, otherwise false
* #param {Object} result.retries informations to spec related retries, e.g. `{ attempts: 0, limit: 0 }`
*/
// afterTest: function(test, context, { error, result, duration, passed, retries }) {
// },
/**
* Hook that gets executed after the suite has ended
* #param {Object} suite suite details
*/
// afterSuite: function (suite) {
// },
/**
* Runs after a WebdriverIO command gets executed
* #param {String} commandName hook command name
* #param {Array} args arguments that command would receive
* #param {Number} result 0 - command success, 1 - command error
* #param {Object} error error object if any
*/
// afterCommand: function (commandName, args, result, error) {
// },
/**
* Gets executed after all tests are done. You still have access to all global variables from
* the test.
* #param {Number} result 0 - test pass, 1 - test fail
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that ran
*/
// after: function (result, capabilities, specs) {
// },
/**
* Gets executed right after terminating the webdriver session.
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that ran
*/
// afterSession: function (config, capabilities, specs) {
// },
/**
* Gets executed after all workers got shut down and the process is about to exit. An error
* thrown in the onComplete hook will result in the test run failing.
* #param {Object} exitCode 0 - success, 1 - fail
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
* #param {<Object>} results object containing test results
*/
// onComplete: function(exitCode, config, capabilities, results) {
// },
/**
* Gets executed when a refresh happens.
* #param {String} oldSessionId session ID of the old session
* #param {String} newSessionId session ID of the new session
*/
// onReload: function(oldSessionId, newSessionId) {
// }
}
I commented out this line: "services: ['selenium-standalone']," and stopped getting the error that followed "INFO @wdio/cli:launcher: Run onPrepare hook".
Hope this helps somebody.
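If you still need the service to start your locally installed Selenium, another option worth trying is the service's skipSeleniumInstall flag. A minimal sketch, assuming the selenium-standalone service options in wdio v7 accept skipSeleniumInstall (in v6 it was a top-level config option, so verify against your service version):
// wdio.conf.js (sketch): keep the service, but skip the Selenium download
services: [
    [
        'selenium-standalone',
        {
            skipSeleniumInstall: true, // reuse the Selenium server you installed yourself
            args: { drivers }          // the drivers object from the question above
        }
    ]
],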

What are cloudflare KV preview_ids and how to get one?

I have the following wrangler.toml. When I want to use dev or preview (e.g. npx wrangler dev or npx wrangler preview), wrangler asks me to add a preview_id to the KV namespaces. Is this an identifier of an existing KV namespace?
I see there is a ticket in the Cloudflare Workers GitHub at https://github.com/cloudflare/wrangler/issues/1458 that says this ought to be clarified, but the ticket was closed by adding an error message:
"In order to preview a worker with KV namespaces, you must designate a preview_id in your configuration file for each KV namespace you'd like to preview."
which is the reason I'm here. :)
For larger context, I would be really glad if someone could clarify: I see that if I give the value of an existing namespace, I can preview, and a KV namespace named __some-worker-dev-1234-workers_sites_assets_preview is generated in Cloudflare. This has a different identifier than the namespace I pointed at in preview_id, and the namespace I pointed at in preview_id stays empty. Why does giving the identifier of an existing KV namespace remove the error message, deploy the assets, and allow previewing, while the actual KV namespace stays empty and a new one is created?
How does kv-asset-handler know to look into this generated namespace to retrieve the assets?
I'm currently testing with the default generated Cloudflare Worker for my site, and I wonder if I have misunderstood something or if there is some mechanism that binds the site namespace to the script during preview/publish.
If there is some automatic mapping going on, can it be set up so that every developer has their own private preview KV namespace?
type = "javascript"
name = "some-worker-dev-1234"
account_id = "<id>"
workers_dev = true
kv_namespaces = [
{ binding = "site_data", id = "<test-site-id>" }
]
[site]
# The location for the site.
bucket = "./dist"
# The entry directory for the package.json that contains
# main field for the file name of the compiled worker file in "main" field.
entry-point = ""
[env.production]
name = "some-worker-1234"
zone_id = "<zone-id>"
routes = [
"http://<site>/*",
"https://www.<site>/*"
]
# kv_namespaces = [
# { binding = "site_data", id = "<production-site-id>" }
# ]
import { getAssetFromKV, mapRequestToAsset } from '@cloudflare/kv-asset-handler'
/**
* The DEBUG flag will do two things that help during development:
* 1. we will skip caching on the edge, which makes it easier to
* debug.
* 2. we will return an error message on exception in your Response rather
* than the default 404.html page.
*/
const DEBUG = false
addEventListener('fetch', event => {
try {
event.respondWith(handleEvent(event))
} catch (e) {
if (DEBUG) {
return event.respondWith(
new Response(e.message || e.toString(), {
status: 500,
}),
)
}
event.respondWith(new Response('Internal Error', { status: 500 }))
}
})
async function handleEvent(event) {
const url = new URL(event.request.url)
let options = {}
/**
* You can add custom logic to how we fetch your assets
* by configuring the function `mapRequestToAsset`
*/
// options.mapRequestToAsset = handlePrefix(/^\/docs/)
try {
if (DEBUG) {
// customize caching
options.cacheControl = {
bypassCache: true,
}
}
const page = await getAssetFromKV(event, options)
// allow headers to be altered
const response = new Response(page.body, page)
response.headers.set('X-XSS-Protection', '1; mode=block')
response.headers.set('X-Content-Type-Options', 'nosniff')
response.headers.set('X-Frame-Options', 'DENY')
response.headers.set('Referrer-Policy', 'unsafe-url')
response.headers.set('Feature-Policy', 'none')
return response
} catch (e) {
// if an error is thrown try to serve the asset at 404.html
if (!DEBUG) {
try {
let notFoundResponse = await getAssetFromKV(event, {
mapRequestToAsset: req => new Request(`${new URL(req.url).origin}/404.html`, req),
})
return new Response(notFoundResponse.body, { ...notFoundResponse, status: 404 })
} catch (e) {}
}
return new Response(e.message || e.toString(), { status: 500 })
}
}
/**
* Here's one example of how to modify a request to
* remove a specific prefix, in this case `/docs` from
* the url. This can be useful if you are deploying to a
* route on a zone, or if you only want your static content
* to exist at a specific path.
*/
function handlePrefix(prefix) {
return request => {
// compute the default (e.g. / -> index.html)
let defaultAssetKey = mapRequestToAsset(request)
let url = new URL(defaultAssetKey.url)
// strip the prefix from the path for lookup
url.pathname = url.pathname.replace(prefix, '/')
// inherit all other props from the default request
return new Request(url.toString(), defaultAssetKey)
}
}
In case the format is not obvious (it wasn't to me), here is a sample config block from the docs with the preview_id specified for a couple of KV namespaces:
kv_namespaces = [
{ binding = "FOO", id = "0f2ac74b498b48028cb68387c421e279", preview_id = "6a1ddb03f3ec250963f0a1e46820076f" },
{ binding = "BAR", id = "068c101e168d03c65bddf4ba75150fb0", preview_id = "fb69528dbc7336525313f2e8c3b17db0" }
]
You can generate a new namespace ID in the Workers KV section of the dashboard or with the Wrangler CLI:
wrangler kv:namespace create "SOME_NAMESPACE" --preview
This answer applies to versions of Wrangler >= 1.10.0
wrangler asks to add a preview_id to the KV namespaces. Is this an identifier to an existing KV namespace?
Yes! The reason there is a different identifier for preview namespaces is so that when developing with wrangler dev or wrangler preview you don't accidentally write changes to your existing production data with possibly buggy or incompatible code. You can add a --preview flag to most wrangler kv commands to interact with your preview namespaces.
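For example, a couple of illustrative commands (the FOO binding and the key/value here are placeholders, not taken from the question):
wrangler kv:key put --binding=FOO "my-key" "some value" --preview
wrangler kv:key list --binding=FOO --preview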
For your situation here there are actually a few things going on.
You are using Workers Sites
You have a KV namespace defined in wrangler.toml
Workers Sites will automatically configure a production namespace for each environment you run wrangler publish on, and a preview namespace for each environment you run wrangler dev or wrangler preview on. If all you need is Workers Sites, then there is no need at all to specify a kv_namespaces table in your manifest. That table is for additional KV namespaces that you may want to read data from or write data to. If that is what you need, you'll need to configure your own namespace and add id to wrangler.toml if you want to use wrangler publish, and preview_id (which should be different) if you want to use wrangler dev or wrangler preview.
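Applied to the wrangler.toml above, that would look something like this (the preview id is a placeholder; generate a real one with the kv:namespace create --preview command shown earlier):
kv_namespaces = [
  { binding = "site_data", id = "<test-site-id>", preview_id = "<test-site-preview-id>" }
]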

WebDriverIO: browser.pause is not working

I have used browser.sleep in Protractor to hold the execution for a particular amount of time. In a similar way, I have tried browser.pause in WebDriverIO, but it is not pausing for the given amount of time.
I also referred to the official WebDriverIO documentation for browser.pause, and the same usage is given there.
Step Definition Code:
Given(/^Verify the title of Salesforce web page$/,function(){
browser.url('https://login.salesforce.com/');
browser.pause(10000);
});
I use async mode in the configuration
WebDriverIO Version: ^5.22.4
wdio.config.js
exports.config = {
//
// ====================
// Runner Configuration
// ====================
//
// WebdriverIO allows it to run your tests in arbitrary locations (e.g. locally or
// on a remote machine).
runner: 'local',
//
// Override default path ('/wd/hub') for chromedriver service.
path: '/',
//
// ==================
// Specify Test Files
// ==================
// Define which test specs should run. The pattern is relative to the directory
// from which `wdio` was called. Notice that, if you are calling `wdio` from an
// NPM script (see https://docs.npmjs.com/cli/run-script) then the current working
// directory is where your package.json resides, so `wdio` will be called from there.
//
specs: [
'./test/features/*.feature'
],
// Patterns to exclude.
exclude: [
// 'path/to/excluded/files'
],
//
// ============
// Capabilities
// ============
// Define your capabilities here. WebdriverIO can run multiple capabilities at the same
// time. Depending on the number of capabilities, WebdriverIO launches several test
// sessions. Within your capabilities you can overwrite the spec and exclude options in
// order to group specific specs to a specific capability.
//
// First, you can define how many instances should be started at the same time. Let's
// say you have 3 different capabilities (Chrome, Firefox, and Safari) and you have
// set maxInstances to 1; wdio will spawn 3 processes. Therefore, if you have 10 spec
// files and you set maxInstances to 10, all spec files will get tested at the same time
// and 30 processes will get spawned. The property handles how many capabilities
// from the same test should run tests.
//
maxInstances: 10,
//
// If you have trouble getting all important capabilities together, check out the
// Sauce Labs platform configurator - a great tool to configure your capabilities:
// https://docs.saucelabs.com/reference/platforms-configurator
//
capabilities: [{
// maxInstances can get overwritten per capability. So if you have an in-house Selenium
// grid with only 5 firefox instances available you can make sure that not more than
// 5 instances get started at a time.
maxInstances: 5,
//
browserName: 'chrome',
// If outputDir is provided WebdriverIO can capture driver session logs
// it is possible to configure which logTypes to include/exclude.
// excludeDriverLogs: ['*'], // pass '*' to exclude all driver session logs
// excludeDriverLogs: ['bugreport', 'server'],
}],
//
// ===================
// Test Configurations
// ===================
// Define all options that are relevant for the WebdriverIO instance here
//
// Level of logging verbosity: trace | debug | info | warn | error | silent
logLevel: 'info',
//
// Set specific log levels per logger
// loggers:
// - webdriver, webdriverio
// - @wdio/applitools-service, @wdio/browserstack-service, @wdio/devtools-service, @wdio/sauce-service
// - @wdio/mocha-framework, @wdio/jasmine-framework
// - @wdio/local-runner, @wdio/lambda-runner
// - @wdio/sumologic-reporter
// - @wdio/cli, @wdio/config, @wdio/sync, @wdio/utils
// Level of logging verbosity: trace | debug | info | warn | error | silent
// logLevels: {
// webdriver: 'info',
// '@wdio/applitools-service': 'info'
// },
//
// If you only want to run your tests until a specific amount of tests have failed use
// bail (default is 0 - don't bail, run all tests).
bail: 0,
//
// Set a base URL in order to shorten url command calls. If your `url` parameter starts
// with `/`, the base url gets prepended, not including the path portion of your baseUrl.
// If your `url` parameter starts without a scheme or `/` (like `some/path`), the base url
// gets prepended directly.
baseUrl: 'https://login.salesforce.com/',
//
// Default timeout for all waitFor* commands.
waitforTimeout: 10000,
//
// Default timeout in milliseconds for request
// if browser driver or grid doesn't send response
connectionRetryTimeout: 90000,
//
// Default request retries count
connectionRetryCount: 0,
//
// Test runner services
// Services take over a specific job you don't want to take care of. They enhance
// your test setup with almost no effort. Unlike plugins, they don't add new
// commands. Instead, they hook themselves up into the test process.
services: ['chromedriver','firefox-profile'],
// Framework you want to run your specs with.
// The following are supported: Mocha, Jasmine, and Cucumber
// see also: https://webdriver.io/docs/frameworks.html
//
// Make sure you have the wdio adapter package for the specific framework installed
// before running any tests.
framework: 'cucumber',
//
// The number of times to retry the entire specfile when it fails as a whole
// specFileRetries: 1,
//
// Test reporter for stdout.
// The only one supported by default is 'dot'
// see also: https://webdriver.io/docs/dot-reporter.html
reporters: ['spec',['allure', {
outputDir: 'allure-results',
disableWebdriverStepsReporting: true,
disableWebdriverScreenshotsReporting: false,
useCucumberStepReporter:true
}]],
// If you are using Cucumber you need to specify the location of your step definitions.
cucumberOpts: {
require: ['./built/**/*.js'], // <string[]> (file/dir) require files before executing features
backtrace: false, // <boolean> show full backtrace for errors
requireModule: [], // <string[]> ("extension:module") require files with the given EXTENSION after requiring MODULE (repeatable)
dryRun: false, // <boolean> invoke formatters without executing steps
failFast: false, // <boolean> abort the run on first failure
format: ['pretty'], // <string[]> (type[:path]) specify the output format, optionally supply PATH to redirect formatter output (repeatable)
colors: true, // <boolean> disable colors in formatter output
snippets: true, // <boolean> hide step definition snippets for pending steps
source: true, // <boolean> hide source uris
profile: [], // <string[]> (name) specify the profile to use
strict: false, // <boolean> fail if there are any undefined or pending steps
tagExpression: '', // <string> (expression) only execute the features or scenarios with tags matching the expression
timeout: 60000, // <number> timeout for step definitions
ignoreUndefinedDefinitions: false, // <boolean> Enable this config to treat undefined definitions as warnings.
},
//
// =====
// Hooks
// =====
// WebdriverIO provides several hooks you can use to interfere with the test process in order to enhance
// it and to build services around it. You can either apply a single function or an array of
// methods to it. If one of them returns with a promise, WebdriverIO will wait until that promise got
// resolved to continue.
/**
* Gets executed once before all workers get launched.
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
*/
// onPrepare: function (config, capabilities) {
// },
/**
* Gets executed just before initialising the webdriver session and test framework. It allows you
* to manipulate configurations depending on the capability or spec.
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that are to be run
*/
// beforeSession: function (config, capabilities, specs) {
// },
/**
* Gets executed before test execution begins. At this point you can access to all global
* variables like `browser`. It is the perfect place to define custom commands.
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that are to be run
*/
before: function (_capabilities) {
// =================
// Assertion Library
// =================
const chai = require('chai');
global.expect = chai.expect;
global.assert = chai.assert;
global.should = chai.should();
require('ts-node').register({ files: true });
},
/**
* Runs before a WebdriverIO command gets executed.
* #param {String} commandName hook command name
* #param {Array} args arguments that command would receive
*/
// beforeCommand: function (commandName, args) {
// },
/**
* Runs before a Cucumber feature
*/
beforeFeature: function (uri, feature, scenarios) {
scenarioCounter = 0;
},
/**
* Runs before a Cucumber scenario
*/
// beforeScenario: function (uri, feature, scenario, sourceLocation) {
// },
/**
* Runs before a Cucumber step
*/
// beforeStep: function (uri, feature, stepData, context) {
// },
/**
* Runs after a Cucumber step
*/
afterStep: function (uri, feature, { error, result, duration, passed }, stepData, context) {
if (error !== undefined) {
browser.takeScreenshot();
}
},
/**
* Runs after a Cucumber scenario
*/
afterScenario: function (uri, feature, scenario, result, sourceLocation) {
scenarioCounter += 1;
addArgument('Scenario #', scenarioCounter);
},
/**
* Runs after a Cucumber feature
*/
// afterFeature: function (uri, feature, scenarios) {
// },
/**
* Runs after a WebdriverIO command gets executed
* #param {String} commandName hook command name
* #param {Array} args arguments that command would receive
* #param {Number} result 0 - command success, 1 - command error
* #param {Object} error error object if any
*/
// afterCommand: function (commandName, args, result, error) {
// },
/**
* Gets executed after all tests are done. You still have access to all global variables from
* the test.
* #param {Number} result 0 - test pass, 1 - test fail
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that ran
*/
// after: function (result, capabilities, specs) {
// },
/**
* Gets executed right after terminating the webdriver session.
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
* #param {Array.<String>} specs List of spec file paths that ran
*/
// afterSession: function (config, capabilities, specs) {
// },
/**
* Gets executed after all workers got shut down and the process is about to exit. An error
* thrown in the onComplete hook will result in the test run failing.
* #param {Object} exitCode 0 - success, 1 - fail
* #param {Object} config wdio configuration object
* #param {Array.<Object>} capabilities list of capabilities details
* #param {<Object>} results object containing test results
*/
// onComplete: function(exitCode, config, capabilities, results) {
// },
/**
* Gets executed when a refresh happens.
* #param {String} oldSessionId session ID of the old session
* #param {String} newSessionId session ID of the new session
*/
//onReload: function(oldSessionId, newSessionId) {
//}
}
Since WebDriverIO is running in async mode, you have to add await to both lines, as below.
Given(/^Verify the title of Salesforce web page$/, async function(){
await browser.url('https://login.salesforce.com/');
await browser.pause(10000);
});
It might be worth trying async/await as below:
Given(/^Verify the title of Salesforce web page$/, async function(){
await browser.url('https://login.salesforce.com/');
await browser.pause(10000);
});
I hope it helps
In my case someone had globally activated fake timers (jest.useFakeTimers()).
For now, in my end-to-end tests, I'm just switching back to real timers (jest.useRealTimers()).
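A minimal sketch of that workaround, assuming the jest global is available in the e2e environment as in that answer (the setup file name is hypothetical):
// e2e.setup.js (hypothetical): restore real timers so browser.pause actually waits
beforeAll(() => {
  jest.useRealTimers();
});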

MAC Authentication failed in freeradius

I followed this Plain MAC-Auth setup guide to configure FreeRADIUS (version 2.2.5) in order to carry out MAC authentication. However, MAC authentication fails with the following log message:
rad_recv: Access-Request packet from host 192.168.0.7 port 59966, id=9, length=79
NAS-IP-Address = 192.168.0.7
User-Name = "34:76:C5:57:0F:A3"
User-Password = "34:76:C5:57:0F:A3"
# Executing section authorize from file /etc/freeradius/sites-enabled/default
+group authorize {
++[preprocess] = ok
++policy rewrite.calling_station_id {
+++? if ((Calling-Station-Id) && "%{Calling-Station-Id}" =~ /^%{config:policy.mac-addr}$/i)
?? Evaluating (Calling-Station-Id) -> FALSE
? Skipping ("%{Calling-Station-Id}" =~ /^%{config:policy.mac-addr}$/i)
+++? if ((Calling-Station-Id) && "%{Calling-Station-Id}" =~ /^%{config:policy.mac-addr}$/i) -> FALSE
+++else else {
++++[noop] = noop
+++} # else else = noop
++} # policy rewrite.calling_station_id = noop
[authorized_macs] expand: %{Calling-Station-Id} ->
++[authorized_macs] = noop
++? if (!ok)
? Evaluating !(ok) -> TRUE
++? if (!ok) -> TRUE
++if (!ok) {
+++[reject] = reject
++} # if (!ok) = reject
+} # group authorize = reject
Using Post-Auth-Type REJECT
WARNING: Unknown value specified for Post-Auth-Type. Cannot perform requested action.
Delaying reject of request 0 for 1 seconds
Going to the next request
Waking up in 0.9 seconds.
Sending delayed reject for request 0
Sending Access-Reject of id 9 to 192.168.0.7 port 59966
Waking up in 4.9 seconds.
Cleaning up request 0 ID 9 with timestamp +30
Ready to process requests.
From the above log, the problem seems to be that the "Calling-Station-Id" value cannot be obtained. Is this a FreeRADIUS configuration problem? Does anyone know how to solve it?
In the accounting section of the RADIUS config, add
update request {
    Called-Station-Id += &NAS-Port-Id
}
and in the post-auth section, add
update reply {
    Called-Station-Id += &NAS-Port-Id
}
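For orientation, a sketch of where those blocks would sit in the virtual server file, assuming the accounting and post-auth sections of /etc/freeradius/sites-enabled/default (the same file the request log above is executing):
# /etc/freeradius/sites-enabled/default (sketch)
accounting {
    update request {
        Called-Station-Id += &NAS-Port-Id
    }
    # ... existing accounting modules ...
}
post-auth {
    update reply {
        Called-Station-Id += &NAS-Port-Id
    }
    # ... existing post-auth modules ...
}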