I'm trying to set up a Selenium Grid in HashiCorp Nomad, but I don't know how to make the networking work.
What I want is one selenium-hub and 8 chrome-nodes. I found that I can only run multiple allocations of the node task if I put it into its own group.
Running it locally, I would start the grid like this:
docker network create grid
docker run -d -p 4442-4444:4442-4444 --net grid --name selenium-hub selenium/hub:4
docker run -d --net grid -e SE_EVENT_BUS_HOST=selenium-hub -e SE_EVENT_BUS_PUBLISH_PORT=4442 -e SE_EVENT_BUS_SUBSCRIBE_PORT=4443 -v /dev/shm:/dev/shm selenium/node-chrome:4
docker run -d --net grid -e SE_EVENT_BUS_HOST=selenium-hub -e SE_EVENT_BUS_PUBLISH_PORT=4442 -e SE_EVENT_BUS_SUBSCRIBE_PORT=4443 -v /dev/shm:/dev/shm selenium/node-chrome:4
...
How do I need to configure the network stanzas for this to work?
My current job looks like this:
job "selenium" {
datacenters = ["dc1"]
type = "service"
group "selenium_hub" {
network {
}
task "selenium_hub" {
driver = "docker"
config {
image = "selenium/hub:4"
}
}
}
group "selenium_nodes" {
count = 8
network {
}
task "selenium_node" {
driver = "docker"
env {
SE_EVENT_BUS_HOST = "selenium-hub"
SE_EVENT_BUS_PUBLISH_PORT = "4442"
SE_EVENT_BUS_SUBSCRIBE_PORT = "4443"
}
config {
image = "selenium/node-chrome:4"
}
}
}
}
If anyone comes across this, I've got it working like this:
job "selenium" {
datacenters = ["dc1"]
type = "service"
group "selenium_hub" {
network {
mode = "host"
}
task "selenium_hub" {
driver = "docker"
config {
image = "selenium/hub:3.141.59-20201010"
network_mode = "host"
}
}
}
group "selenium_nodes" {
count = 8
network {
mode = "host"
port "http" {}
}
task "selenium_node" {
driver = "docker"
env {
HUB_HOST = "localhost"
SE_OPTS = "-port ${NOMAD_PORT_http}"
}
config {
network_mode = "host"
image = "selenium/node-chrome:3.141.59-20201010"
}
}
}
}
I used Selenium 3 instead of 4 because we also had other issues with 4.
Since I used the host network and all nodes default to port 5555, I needed to pass a dynamic port (http) to Selenium via the SE_OPTS environment variable.
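For reference, a hedged sketch of what the node group might look like if you stay on Selenium 4 (untested; it assumes the docker-selenium 4 images honor the SE_NODE_HOST and SE_NODE_PORT environment variables for the node's advertised address and port):

group "selenium_nodes" {
  count = 8

  network {
    mode = "host"
    port "http" {}
  }

  task "selenium_node" {
    driver = "docker"

    env {
      # The hub's event bus, reachable on localhost in host network mode.
      SE_EVENT_BUS_HOST           = "localhost"
      SE_EVENT_BUS_PUBLISH_PORT   = "4442"
      SE_EVENT_BUS_SUBSCRIBE_PORT = "4443"
      # Assumption: these make the node bind and advertise the dynamic port.
      SE_NODE_HOST = "localhost"
      SE_NODE_PORT = "${NOMAD_PORT_http}"
    }

    config {
      network_mode = "host"
      image        = "selenium/node-chrome:4"
    }
  }
}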
Related
While working with Selenoid in Docker, I can see this error in the docker logs: "/usr/bin/selenoid: browsers config: read error: open /etc/selenoid/browsers.json: no such file or directory". My volume mapping is "-v $PWD/config/:/etc/selenoid/:ro". If I run "cat $PWD/config/browsers.json", the content of browsers.json is shown, and I can also verify manually that the file is present.
Below are the commands I am using. I execute them directly through Jenkins. Locally the exact same commands work fine, but in Jenkins they give this error.
mkdir -p config
cat <<EOF > $PWD/config/browsers.json
{
  "firefox": {
    "default": "57.0",
    "versions": {
      "57.0": {
        "image": "selenoid/firefox:90.0",
        "port": "4444",
        "path": "/wd/hub"
      },
      "58.0": {
        "image": "selenoid/firefox:90.0",
        "port": "4444",
        "path": "/wd/hub"
      },
      "59.0": {
        "image": "selenoid/firefox:90.0",
        "port": "4444",
        "path": "/wd/hub"
      }
    }
  }
}
EOF
chmod +rwx $PWD/config/browsers.json
cat $PWD/config/browsers.json
docker pull aerokube/selenoid:latest
docker pull aerokube/cm:latest
docker pull aerokube/selenoid-ui:latest
docker pull selenoid/video-recorder:latest-release
docker pull selenoid/vnc_chrome:92.0
docker pull selenoid/vnc_firefox:90.0
docker stop selenoid ||true
docker rm selenoid ||true
docker run -d --name selenoid -p 4444:4444 -v /var/run/docker.sock:/var/run/docker.sock \
  -v $PWD/config/:/etc/selenoid/:ro aerokube/selenoid
The error is self-explanatory: you don't have browsers.json in the directory you are mounting to /etc/selenoid inside the container. I would recommend using absolute paths instead of the $PWD variable.
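For example, a minimal sketch (the /opt/selenoid path is hypothetical; use wherever the config actually lives on the Jenkins host):

mkdir -p /opt/selenoid/config
cp $PWD/config/browsers.json /opt/selenoid/config/browsers.json  # hypothetical absolute path
docker run -d --name selenoid -p 4444:4444 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /opt/selenoid/config/:/etc/selenoid/:ro aerokube/selenoid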
I am building a Jenkins job that needs to run some tests against a Selenium server. I have defined a stage where I start the Selenium server in one container, and after that I want to run my tests from another container against the Selenium server.
The Selenium server seems to start fine, but after that the job just hangs, displaying a spinner.
This is what my pipeline script looks like:
pipeline {
  agent {
    kubernetes {
      defaultContainer 'jnlp'
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: node
    image: node:12.14.1
  - name: selenium
    image: vvoyer/selenium-standalone
"""
    }
  }
  stages {
    stage('Checkout codebase') {
      // do checkout
    }
    stage('start selenium') {
      steps {
        container('selenium') {
          sh '''
            selenium-standalone install
            selenium-standalone start # script hangs after this command
          '''
        }
      }
    }
    stage('test') {
      steps {
        container('node') {
          // build test project & run tests
        }
      }
    }
  }
}
Can anyone tell me what I am doing wrong here and how I can fix it?
Thanks! I solved a similar error by adding '&' at the end of the shell script step inside the Jenkinsfile.
For example:
sh label: '',script: 'docker-compose -f TestAutomation_UI_API/docker-compose-v3.yml up --scale chrome=3 &'
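Applied to the hanging selenium-standalone step above, a sketch of the same idea: background the server, then poll its status endpoint so the stage only proceeds once it is up (the /wd/hub/status URL, the 30-second budget, and curl being available in the container are assumptions):

sh '''
  selenium-standalone install
  # Run in the background so the sh step can return instead of blocking.
  nohup selenium-standalone start > selenium.log 2>&1 &
  # Poll until the server responds (assumes the default port 4444).
  for i in $(seq 1 30); do
    curl -sf http://localhost:4444/wd/hub/status && break
    sleep 1
  done
'''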
I have a Jenkins declarative pipeline with two separate Docker containers,
and I need to use the GitHub commit message and other git data in the post section.
Because there are two different agents for the two Docker containers, I have agent none at the top of the pipeline.
I use a trick to get the GitHub commit message; I don't know how to get it in a simpler way.
environment {
  COMMIT_TEXT = sh(
    script: 'git log --format="full" -1 ${GIT_COMMIT}',
    returnStdout: true
  ).trim()
  TextComment = 'Text from Variable'
}
I can't use an environment block with sh at the top level together with agent none,
so I need to pass variables to the post section some other way.
Any ideas?
Thanks.
pipeline {
  agent none
  /* --- I can not use "sh" here because it cannot be executed with 'agent none'
  environment {
    COMMIT_TEXT = sh(
      script: 'git log --format="full" -1 ${GIT_COMMIT}',
      returnStdout: true
    ).trim()
    TextComment = 'Text from Variable'
  }
  */
  stages {
    stage('build container up') {
      agent {
        docker {
          image 'container:local'
        }
        /* --- there is no reason to put the variable here, because it will be gone with the container before post processing.
        environment {
          COMMIT_TEXT = sh(
            script: 'git log --format="full" -1 ${GIT_COMMIT}',
            returnStdout: true
          ).trim()
          TextComment = 'Text from Variable'
        }
        */
      }
      stages {
        stage('Build') {
          steps {
            git branch: 'testing-jenkinsfile', url: 'https://github.com/...git'
          }
        }
      }
    }
    stage('Build image') {
      steps {
        script {
          docker.build("walletapi:local")
        }
      }
    }
    stage('Run service container') {
      agent { label 'master' }
      steps {
        sh 'docker run -it -d --name container01 container01:local'
      }
    }
  }
  post {
    always {
      script {
        sh 'env'
      }
    }
  }
}
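One way to get the commit text into the post section (a sketch, not from the thread): declare a plain Groovy variable outside the pipeline block; it lives for the whole run, so any stage with an agent can set it and post can read it. The image name is taken from the question above.

// Script-level variable, visible in stages and in post.
def commitText = ''

pipeline {
  agent none
  stages {
    stage('Build') {
      agent { docker { image 'container:local' } }
      steps {
        script {
          commitText = sh(
            script: 'git log --format="full" -1 ${GIT_COMMIT}',
            returnStdout: true
          ).trim()
        }
      }
    }
  }
  post {
    always {
      // echo does not need an agent, so this works under 'agent none'.
      echo "Commit: ${commitText}"
    }
  }
}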
I would like to add some e2e tests for my vue.js application and run them in the pipeline.
The corresponding part in my gitlab-ci.yml looks like this:
e2e:
  image: node:8
  before_script:
    - npm install
  services:
    - name: selenium/standalone-chrome
      alias: chrome
  stage: testing
  script:
    - cd online-leasing-frontend
    - npm install
    - npm run test:e2e
And my nightwatch.js config:
{
  "selenium": {
    "start_process": false
  },
  "test_settings": {
    "default": {
      "selenium_port": 4444,
      "selenium_host": "chrome"
    }
  }
}
Is "selenium_host": "chrome" the correct way of pointing the test at the selenium service?
I get the following error indicating that my e2e test can’t connect to the selenium service:
Connection refused! Is selenium server started?
Any tips?
The problem was that, according to this issue, GitLab CI was using the Kubernetes executor instead of the Docker executor, which maps all services to 127.0.0.1. After setting selenium_host to this address, everything worked.
{
  "selenium": {
    "start_process": false
  },
  "test_settings": {
    "default": {
      "selenium_port": 4444,
      "selenium_host": "127.0.0.1"
    }
  }
}
On the Selenium Repo it says:
"When executing docker run for an image with Chrome or Firefox please either mount -v /dev/shm:/dev/shm or use the flag --shm-size=2g to use the host's shared memory."
I don't know gitlab-ci that well, but I'm afraid it is not possible to add this as a parameter to a service.
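One possible workaround, assuming you control the runner and it uses the Docker executor (a sketch; shm_size is a [runners.docker] setting, but whether it also covers service containers may depend on the runner version):

# config.toml on the GitLab runner host
[[runners]]
  [runners.docker]
    shm_size = 2147483648  # 2 GB, in bytes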
I'm running Selenium Grid through docker compose and targeting the different versions through my desired capabilities.
I have a requirement to test multiple browser versions; however, at the moment I need to target the full version, i.e. Chrome versions "63.0.3239.132" or "64.0.3282.140".
I want to be able to specify just 63 or 64 etc. so that my Docker setup can update regularly without the need to update the code.
Is there a way to do this through desired capabilities?
Below is my docker compose file:
version: '2'
services:
  seleniumhub:
    image: selenium/hub:3.9.1-actinium
    ports:
      - 4444:4444
  chrome64:
    image: selenium/node-chrome-debug:3.9.1-actinium
    depends_on:
      - seleniumhub
    environment:
      HUB_PORT_4444_TCP_ADDR: seleniumhub
      HUB_PORT_4444_TCP_PORT: 4444
    ports:
      - 5900
  chrome63:
    image: selenium/node-chrome-debug:3.8.1-erbium
    depends_on:
      - seleniumhub
    environment:
      HUB_PORT_4444_TCP_ADDR: seleniumhub
      HUB_PORT_4444_TCP_PORT: 4444
    ports:
      - 5900
Below is how I set up my desired capabilities:
DesiredCapabilities caps = new DesiredCapabilities();
// generic desktop browser config:
if (DriverConfig.BrowserName != "")
{
    caps.SetCapability("browserName", _browserName);
}
if (DriverConfig.Version != "")
{
    caps.SetCapability("version", _version);
}
if (DriverConfig.Platform != "")
{
    caps.SetCapability("platform", _platform);
}
If you specify -browser "browserName=chrome,version=63" on the command line when registering the Selenium node with the Selenium hub, then you can request version 63 in the capabilities in your test script.
So it comes down to whether you have a way to pass -browser in the registration command when setting up the grid with Docker.
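If SE_OPTS reaches the node's registration command in your images (an assumption worth verifying for the 3.x node images), a hedged compose sketch could look like this:

  chrome63:
    image: selenium/node-chrome-debug:3.8.1-erbium
    depends_on:
      - seleniumhub
    environment:
      HUB_PORT_4444_TCP_ADDR: seleniumhub
      HUB_PORT_4444_TCP_PORT: 4444
      # Assumption: appended to the node's startup command, so the node
      # advertises the simplified version string "63".
      SE_OPTS: "-browser browserName=chrome,version=63"

The test would then request it with caps.SetCapability("version", "63").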