WebdriverIO Selenium Standalone Service v6 onwards: unable to overwrite the hostname for a private Selenium backend

The WebdriverIO test runner has an option:
- if you are using a private Selenium backend, you should define the hostname, port, and path here.
hostname: 'localhost',
port: 4444,
path: '/',
Since version "@wdio/selenium-standalone-service": "^6.0.0", this "hostname" is unchangeable and always stays as localhost. It seems to auto-detect that it should be localhost only and does not refer to the config at all, i.e. even if I update it manually in wdio.conf.js as
hostname: 'selenium-hub',
port: 4445,
path: '/',
Upon execution, the hostname still stays 'localhost' instead of 'selenium-hub', and the port stays '4444' instead of '4445'.
In previous versions, the value passed on the command line with --hostname was applied successfully as required, i.e.
./node_modules/.bin/wdio wdio.conf.js --hostname 'selenium-hub'
would pass selenium-hub as the hostname.
Is anyone experiencing a similar issue?

Add the hostname, port, path, and protocol to the capabilities array.
Instead of:
hostname: '<unique IP address>',
port: <port number>,
path: '/',
protocol: 'http', // or 'https'
capabilities: [{
    maxInstances: 5,
    browserName: 'chrome',
}],
Do this:
capabilities: [{
    maxInstances: 5,
    browserName: 'chrome',
    hostname: '<unique IP address>',
    port: <port number>,
    path: '/',
    protocol: 'http', // or 'https'
}],
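As a concrete illustration of that workaround, a wdio.conf.js could look like the sketch below; the hostname selenium-hub, port 4444, and spec path are assumptions for the example, not values confirmed by the original posts:
exports.config = {
    runner: 'local',
    specs: ['./test/specs/**/*.js'], // hypothetical spec location
    capabilities: [{
        maxInstances: 5,
        browserName: 'chrome',
        // Per-capability connection details for the private Selenium backend,
        // as described in the workaround above (values are illustrative)
        hostname: 'selenium-hub',
        port: 4444,
        path: '/',
        protocol: 'http',
    }],
};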

Related

How can I create a router and load-balance a service added to Traefik via consulCatalog?

I have Nextcloud running on bare metal on 2 nodes:
node1: 192.168.1.10
node2: 192.168.1.11
In Consul I have defined the nextcloud service as follows on both nodes:
{
  "service": {
    "name": "nextcloud",
    "tags": ["nextcloud", "traefik"],
    "port": 80,
    "check": {
      "tcp": "localhost:80",
      "args": ["ping", "-c1", "127.0.0.1"],
      "interval": "10s",
      "status": "passing",
      "success_before_passing": 3,
      "failures_before_critical": 3
    }
  }
}
Now this shows up in Consul fine.
Static config (traefik.yaml):
global:
  # Send anonymous usage data
  sendAnonymousUsage: true
api:
  dashboard: true
  debug: true
log:
  level: DEBUG
entryPoints:
  http:
    address: ":80"
  https:
    address: ":443"
serversTransport:
  insecureSkipVerify: true
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
  file:
    directory: "/config/"
    watch: true
  consulCatalog:
    defaultRule: "Host(`{{ .Name }}.sub.mydomain.com`)"
    endpoint:
      address: http://127.0.0.1:8500
certificatesResolvers:
  linode:
    acme:
      caServer: https://acme-staging-v02.api.letsencrypt.org/directory
      email: myemail@domain.com
      storage: acme.json
      dnsChallenge:
        provider: linode
        resolvers:
          - "1.1.1.1:53"
          - "1.0.0.1:53"
and then dynamic /config/config.yaml:
http:
  routers:
    nextcloud@consulCatalog:
      entryPoints:
        - "https"
      rule: "Host(`home.sub.mydomain.com`) && Path(`/nextcloud`)"
      tls:
        certResolver: linode
      service: nextcloud
  services:
    nextcloud:
      loadBalancer:
        servers:
          - url: http://192.168.1.10
          - url: http://192.168.1.11
        passHostHeader: true
But this shows up as a file provider with TLS, in addition to the existing consulCatalog provider,
and it is not mapped to an IP or domain.
The actual consulCatalog provider shows up, but with no TLS.
I am wondering why my dynamic configuration in http did not update nextcloud@consulcatalog and set the https entrypoint.
Any help will be greatly appreciated; I am struggling very hard to get this to work.
I have tried following the Traefik docs, but they are very confusing, especially the consulCatalog part.
Your configuration is showing up as being defined via the file provider because you are statically defining it in the file at /config/config.yaml.
In order to dynamically retrieve this configuration from Consul, you should not define it in the file provider's config file; instead, configure tags on the Consul service registration that instruct Traefik how to route traffic to your service.
For example:
{
  "service": {
    "name": "nextcloud",
    "tags": [
      "nextcloud",
      "traefik.enable=true",
      "traefik.http.routers.nextcloud.entrypoints=https",
      "traefik.http.routers.nextcloud.rule=(Host(`home.sub.mydomain.com`) && Path(`/nextcloud`))",
      "traefik.http.routers.nextcloud.tls.certresolver=linode",
      "traefik.http.services.nextcloud.loadbalancer.passhostheader=true"
    ],
    "port": 80,
    "check": {
      "tcp": "localhost:80",
      "args": [
        "ping",
        "-c1",
        "127.0.0.1"
      ],
      "interval": "10s",
      "status": "passing",
      "success_before_passing": 3,
      "failures_before_critical": 3
    }
  }
}
More info can be found in the Routing Configuration docs for Traefik's Consul catalog provider.

What are the valid values for 'runner' in the wdio.conf.js file?

For WebdriverIO, what other values can runner take in wdio.conf.js besides runner: 'local'? Any examples?
Thanks
OK, I found out from the official WDIO chat that only local is supported for now.
Perhaps such an example will help you? From my work example:
exports.config = {
  hostname: "some-test",
  port: 4444,
  path: "/wd/hub",
  specs: ["./tests/*.ts"],
  sync: true,
  services: ["selenium-standalone"],
  capabilities: [
    {
      browserName: "chrome"
    }
  ],
  baseUrl: "http://my-url",
  framework: "mocha",
  mochaOpts: {
    ui: "bdd",
    timeout: 10000
  }
};
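Note that the config above omits the runner key entirely; if you want to set it explicitly, 'local' is currently the only valid value. A minimal sketch (the spec path is illustrative):
exports.config = {
  // 'local' is the only supported runner at the moment
  runner: 'local',
  specs: ['./tests/*.ts'],
  capabilities: [{ browserName: 'chrome' }],
  framework: 'mocha'
};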

Selenium isn't able to reach a docker container with docker-compose run

I have the following docker-compose.yml which starts a chrome-standalone container and a nodejs application:
version: '3.7'
networks:
  selenium:
services:
  selenium:
    image: selenium/standalone-chrome-debug:3
    networks:
      - selenium
    ports:
      - '4444:4444'
      - '5900:5900'
    volumes:
      - /dev/shm:/dev/shm
    user: '7777:7777'
  node:
    image: node_temp:latest
    build:
      context: .
      target: development
      args:
        UID: '${USER_UID}'
        GID: '${USER_GID}'
    networks:
      - selenium
    env_file:
      - .env
    ports:
      - '8090:8090'
    volumes:
      - .:/home/node
    depends_on:
      - selenium
    command: >
      sh -c 'yarn install &&
      yarn dev'
I'm running the containers as follows:
docker-compose up -d selenium
docker-compose run --service-ports node sh
and starting the e2e from within the shell.
When running the e2e tests, selenium can be reached from the node container (through http://selenium:4444), but node isn't reachable from the selenium container.
I have tested this by VNC'ing into the selenium container and pointing the browser to http://node:8090. (The node container is reachable from the host, however, through http://localhost:8090.)
I first thought that docker-compose run doesn't add the running container to the proper network; however, running docker network inspect test_app gives me the following:
[
    {
        "Name": "test_app_selenium",
        "Id": "df6517cc7b6446d1712b30ee7482c83bb7c3a9d26caf1104921abd6bbe2caf68",
        "Created": "2019-06-30T16:08:50.724889157+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.31.0.0/16",
                    "Gateway": "172.31.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "8a76298b237790c62f80ef612debb021549439286ce33e3e89d4ee2f84de3aec": {
                "Name": "test_app_node_run_78427bac2fd1",
                "EndpointID": "04310bc4e564f831e5d08a0e07891d323a5953fa936e099d20e5e384a6053da8",
                "MacAddress": "02:42:ac:1f:00:03",
                "IPv4Address": "172.31.0.3/16",
                "IPv6Address": ""
            },
            "ef087732aacf0d293a2cf956855a163a081fc3748ffdaa01c240bde452eee0fa": {
                "Name": "test_app_selenium_1",
                "EndpointID": "24a597e30a3b0b671c8b19fd61b9254bea9e5fcbd18693383d93d3df789ed895",
                "MacAddress": "02:42:ac:1f:00:02",
                "IPv4Address": "172.31.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "selenium",
            "com.docker.compose.project": "test_app",
            "com.docker.compose.version": "1.24.1"
        }
    }
]
This shows both containers running on the "selenium" network. I'm not sure, however, whether the node container is properly aliased on the network and whether this is the expected behaviour.
Am I missing some config here?
It seems that docker-compose run names the container differently so it does not clash with the service name defined in docker-compose.yml; http://node:8090 was therefore not reachable.
I solved this by adding a --name flag as follows:
docker-compose run --service-ports --name node node sh
EDIT:
It took me a while to notice, but I was overcomplicating the implementation by a lot. The above docker-compose.yml can be simplified by adding host networking. This simply exposes all running containers on localhost and makes them reachable on localhost by their specified ports. Considering that I don't need any encapsulation (it's meant for dev), the following docker-compose.yml sufficed:
version: '3.7'
services:
  selenium:
    image: selenium/standalone-chrome:3
    # NOTE: port definition is useless with network_mode: host
    network_mode: host
    user: '7777:7777'
  node:
    image: node_temp:latest
    build:
      context: .
      target: development
      args:
        UID: '${USER_UID}'
        GID: '${USER_GID}'
    network_mode: host
    env_file:
      - .env
    volumes:
      - .:/home/node
    command: >
      sh -c 'yarn install &&
      yarn dev'

theintern calling /__intern/client.html in URL

I want to run Intern tests locally with selenium-standalone, both of which I installed through npm.
When I go to run the tests with "./node_modules/.bin/intern-runner" config=./pmictests/test/bit/GAT/internEx/intern,
the browser starts but the URL goes to http://localhost:8585/__intern/client.html?config=.%2Fpmictests%2Ftest%2Fbit%2FGAT%2FinternEx%2Fintern&basePath
The __intern/client.html part is not what I want.
Why is this happening? I'm trying to get my head around it, but I've been stuck on this problem for a while.
My config file looks like this:
define({
  proxyPort: 9515,
  proxyUrl: 'http://localhost:8585/',
  tunnel: 'NullTunnel',
  useSauceConnect: false,
  capabilities: {
    'fixSessionCapabilities': false,
    'selenium-version': '2.35.0',
    'idle-timeout': 36
  },
  environments: [
    { browserName: 'chrome' }
  ],
  maxConcurrency: 3,
  useSauceConnect: false,
  webdriver: {
    host: 'localhost',
    port: 4444
  },
  suites: [ './tests/test/' ],
  excludeInstrumentation: /^(?:tests|node_modules)\//
});
That URL is for running unit tests. When you run intern-runner, it automatically loads client.html to run any unit test suites listed in suites. Once the unit tests are finished, Intern runs any functional tests listed in functionalSuites (which will load their own URLs).
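For illustration, here is a sketch of a config that lists both kinds of suites; the module IDs below are hypothetical, and the rest mirrors the question's config:
define({
  tunnel: 'NullTunnel',
  environments: [
    { browserName: 'chrome' }
  ],
  // Unit suites: intern-runner serves these through __intern/client.html
  suites: [ 'tests/unit/hello' ],
  // Functional suites: these drive the browser to your own pages via WebDriver
  functionalSuites: [ 'tests/functional/index' ],
  excludeInstrumentation: /^(?:tests|node_modules)\//
});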

Running intern with PhantomJS: window is undefined

I've followed all the steps described here: https://github.com/theintern/intern/wiki/Using-Intern-with-PhantomJS
My Intern config is roughly as follows:
define({
  proxyPort: 9000,
  proxyUrl: 'http://localhost:9000/',
  environments: [
    { browserName: 'phantom' }
  ],
  maxConcurrency: 3,
  useSauceConnect: false,
  webdriver: {
    host: 'localhost',
    port: 4444
  },
  reporters: ['runner'],
  useLoader: {
    'host-node': 'dojo/dojo',
    'host-browser': 'node_modules/dojo/dojo.js'
  },
  loader: {
    packages: [
      { name: 'myApp', location: '...' }
    ],
    baseUrl: '...',
    paths: {...}
  },
  suites: [
    'test/hello'
  ],
  functionalSuites: [],
  excludeInstrumentation: /(^test(\/|\\)|reporters|node_modules)/
});
I run PhantomJS with
.\node_modules\.bin\phantomjs --webdriver 4444 --webdriver-loglevel='debug'
and it listens on 4444.
I even disabled Windows Firewall, but still I get
ReferenceError: window is not defined
at ***.js:348:142
at Function.vm.runInThisContext (***\node_modules\intern\node_modules\istanbul\lib\hook.js:163:16)
at ***\node_modules\intern\node_modules\dojo\dojo.js:757:8
at fs.js:266:14
at Object.oncomplete (fs.js:107:15)
as though Intern is running in Node, not in Phantom. Phantom's console is also completely silent.
What am I missing? Or is there a way to debug Intern's actions? Thanks
OK, I've finally figured this out.
I've been running Intern using
.\node_modules\.bin\intern-client config=test/intern
while it should've been
.\node_modules\.bin\intern-runner config=test/intern
The thing is that intern-runner and intern-client are two different applications: one is for running tests in browsers via WebDriver, the other for running them in Node. It didn't catch my eye even though I read and re-read the docs more than once. The distinction should probably be highlighted there.
Hope this helps someone )