Running Nightwatch tests inside Docker - Selenium server doesn't start

I'm trying to integrate my e2e tests into our CI pipeline.
We are using Jenkins as CI; we build a Docker image and all the tests run inside the container.
When trying to run the e2e tests I receive an error stating: "Connection refused! Is selenium server started?"
After building the image and installing all the npm packages, I run this command in the Jenkinsfile:
run_in_stage('End2End test', {
    image.inside("-u root") {
        sh '''
            npm run build:dev
            http-server ./dist -p 3001 -s &
            xvfb-run --server-args="-screen 0 1600x1200x24" npm run test:e2e:smoke
        '''
    }
})
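One detail worth noting in that script: http-server is launched in the background and the tests start immediately afterwards, so the smoke tests can race the static server. A minimal guard, assuming the wait-on npm package is available (it is not part of the original setup):

http-server ./dist -p 3001 -s &
npx wait-on http://localhost:3001   # hypothetical guard: block until the server answers
xvfb-run --server-args="-screen 0 1600x1200x24" npm run test:e2e:smoke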
In the Dockerfile I set up Chrome with Xvfb:
RUN \
    wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \
    echo "deb http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google.list && \
    apt-get update && \
    apt-get install -y xvfb google-chrome-stable
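One thing worth checking with this setup: selenium-server-standalone-jar only ships the jar file, and the Selenium server itself is a Java application, so the image also needs a Java runtime; without one the server never starts and Nightwatch reports exactly this "Connection refused" error. A sketch of the extra Dockerfile line, assuming a Debian-based image (the package name is an assumption about the base image):

RUN apt-get update && \
    apt-get install -y default-jre   # JRE for the standalone Selenium jar (assumed missing)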
This is how I set up Selenium in the nightwatch.conf.js file:
const seleniumServer = require('selenium-server-standalone-jar');
const chromeDriver = require('chromedriver');

selenium: {
  start_process: true,
  server_path: seleniumServer.path,
  host: '127.0.0.1',
  port: 4444,
  cli_args: {
    'webdriver.chrome.driver': chromeDriver.path
  }
},
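For context, a minimal sketch of how that selenium block typically sits inside a complete nightwatch.conf.js; the src_folders value and the Chrome flags are illustrative assumptions, not taken from the question:

// nightwatch.conf.js - illustrative skeleton, not the asker's full config
const seleniumServer = require('selenium-server-standalone-jar');
const chromeDriver = require('chromedriver');

module.exports = {
  src_folders: ['tests'], // assumed location of the test specs
  selenium: {
    start_process: true,
    server_path: seleniumServer.path,
    host: '127.0.0.1',
    port: 4444,
    cli_args: {
      'webdriver.chrome.driver': chromeDriver.path
    }
  },
  test_settings: {
    default: {
      desiredCapabilities: {
        browserName: 'chrome',
        chromeOptions: {
          // flags commonly needed when Chrome runs as root inside a container
          args: ['--no-sandbox', '--disable-dev-shm-usage']
        }
      }
    }
  }
};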

Related

Allure report generation

I have the following Jenkins Pipeline:
node('api50analysis') {
    def err = null
    currentBuild.result = "SUCCESS"
    try {
        dir('/var/www/TMS/tests') {
            stage 'Delete old report'
            sh 'rm -Rf ./allure-report'
            sh 'rm -Rf ./allure-results'
            stage 'Get AutoTest from git'
            git credentialsId: '9d6d49c0-9e6c-4e81-8893-cb5993b6fd83', url: '$git_url_test', branch: '$branch_test'
            stage 'Install dependencies'
            sh 'sudo chown -R $(whoami) /var/www/TMS/tests'
            sh 'npm install'
            stage 'WEB tests as admin'
            sh 'npm run clearDB'
            sh 'npx wdio run wdio.conf.ts --suite adminSuites'
            sh 'npm run clearDB'
            sh 'npx wdio run wdio.conf.ts --suite aidLimit'
            sh 'npm run clearDB'
            sh 'npx wdio run wdio.conf.ts --suite limit'
            sh 'npm run clearDB'
            sh 'npx wdio run wdio.conf.ts --suite configuration'
            sh 'npm run clearDB'
            sh 'npx wdio run wdio.conf.ts --suite merchant'
            sh 'npx wdio run wdio.conf.ts --suite terminal'
            sh 'npx wdio run wdio.conf.ts --suite deviceType'
            sh 'npm run clearDB'
            sh 'npx wdio run wdio.conf.ts --suite device'
            sh 'npm run clearDB'
        }
    }
    catch (caughtError) {
        err = caughtError
        currentBuild.result = "FAILURE"
        build job: 'telegram_job',
            parameters: [string(name: 'msg',
                value: "----------\n" +
                       "Job name: ${JOB_NAME}\n" +
                       "Status: $currentBuild.result\n" +
                       "Build number: ${BUILD_NUMBER}\n" +
                       "Build URL ${BUILD_URL}\n" +
                       "----------")]
    } finally {
        stage 'Allure'
        script {
            allure([
                includeProperties: false,
                jdk: '',
                properties: [],
                reportBuildPolicy: 'ALWAYS',
                results: [[path: 'target/allure-results']]
            ])
        }
        if (err) {
            throw err
        }
    }
}
The Allure report is always generated; however, if at least one test has failed, it generates an empty report instead of one with data on which tests passed, which tests failed, and a screenshot attached at the point where the test fell over (configured in the WebdriverIO config).
What could be the problem? The server where the autotests run is CentOS 7. Here is an example of a report when all tests pass.
In the console in this case:
[Pipeline] stage (Allure)
Using the ‘stage’ step without a block argument is deprecated
Entering stage Allure
Proceeding
[Pipeline] script
[Pipeline] {
[Pipeline] allure
[TMS_AUTOTEST] $ /opt/jenkins/tools/ru.yandex.qatools.allure.jenkins.tools.AllureCommandlineInstallation/Allure/bin/allure generate -c -o /opt/jenkins/workspace/TMS_AUTOTEST/allure-report
allure-results does not exist
Report successfully generated to /opt/jenkins/workspace/TMS_AUTOTEST/allure-report
Allure report was successfully generated.
Creating artifact for the build.
Artifact was added to the build.
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 3
Finished: FAILURE
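The log offers a concrete lead: "allure-results does not exist". The allure step runs in the finally block against the Jenkins workspace (/opt/jenkins/workspace/TMS_AUTOTEST per the log), while the tests run inside dir('/var/www/TMS/tests'), and the step looks for target/allure-results, a path nothing in the pipeline writes to. A hedged sketch of the finally block with the paths aligned; that the wdio Allure reporter writes to ./allure-results under the tests directory is an assumption inferred from the rm -Rf ./allure-results cleanup step, and if the Allure plugin resolves results paths against the workspace root rather than the dir() context, an absolute path or a copy step would be needed instead:

} finally {
    stage 'Allure'
    dir('/var/www/TMS/tests') {
        script {
            allure([
                includeProperties: false,
                jdk: '',
                properties: [],
                reportBuildPolicy: 'ALWAYS',
                // assumption: matches the reporter's output dir cleaned up at the start
                results: [[path: 'allure-results']]
            ])
        }
    }
    if (err) {
        throw err
    }
}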

Selenoid[/usr/bin/selenoid: browsers config: read error: open /etc/selenoid/browsers.json: no such file or directory]

While working with Selenoid in Docker, I can see this error in the Docker logs: "[/usr/bin/selenoid: browsers config: read error: open /etc/selenoid/browsers.json: no such file or directory]". My volume mapping is "-v $PWD/config/:/etc/selenoid/:ro". If I run "cat $PWD/config/browsers.json", my browsers.json content is printed, and I can also validate manually that the file is present.
Below are the commands I am using. I execute them directly through Jenkins. Locally the same exact commands work fine, but in Jenkins they give this error.
mkdir -p config
cat <<EOF > $PWD/config/browsers.json
{
  "firefox": {
    "default": "57.0",
    "versions": {
      "57.0": {
        "image": "selenoid/firefox:90.0",
        "port": "4444",
        "path": "/wd/hub"
      },
      "58.0": {
        "image": "selenoid/firefox:90.0",
        "port": "4444",
        "path": "/wd/hub"
      },
      "59.0": {
        "image": "selenoid/firefox:90.0",
        "port": "4444",
        "path": "/wd/hub"
      }
    }
  }
}
EOF
chmod +rwx $PWD/config/browsers.json
cat $PWD/config/browsers.json
docker pull aerokube/selenoid:latest
docker pull aerokube/cm:latest
docker pull aerokube/selenoid-ui:latest
docker pull selenoid/video-recorder:latest-release
docker pull selenoid/vnc_chrome:92.0
docker pull selenoid/vnc_firefox:90.0
docker stop selenoid ||true
docker rm selenoid ||true
docker run -d --name selenoid -p 4444:4444 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v $PWD/config/:/etc/selenoid/:ro aerokube/selenoid
The error is self-explanatory: you don't have browsers.json in the directory you are mounting to /etc/selenoid inside the container. I would recommend using absolute paths instead of the $PWD variable.
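A minimal sketch of the same run with the mount spelled out absolutely; the workspace path here is a placeholder assumption, so substitute the directory the Jenkins job actually runs in:

CONFIG_DIR=/var/lib/jenkins/workspace/my-job/config   # placeholder path, not from the question
docker run -d --name selenoid -p 4444:4444 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v "$CONFIG_DIR":/etc/selenoid/:ro aerokube/selenoid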

Build is failing: Error forwarding the new session Empty pool of VM for setup Capabilities {browserName: chrome}

So I am running my tests using Jenkins; the Pipeline script does everything for me, but the build is failing at the point where it says
WebDriver::debugWebDriverLogs method has been called when webDriver is not set
for each test, and at the end of the log it also says: [Facebook\WebDriver\Exception\UnknownServerException] Error forwarding the new session Empty pool of VM for setup Capabilities {browserName: chrome}
I have tried changing this part, just after I pull:
sh 'docker pull selenium/standalone-chrome'
from this:
'docker run -d -p 4444:4444 --name=selenium --net=host -v /dev/shm:/dev/shm selenium/standalone-chrome:3.141.59-dubnium'
to:
'docker run -d -p 4444:4444 --name=selenium --net=host -v /dev/shm:/dev/shm selenium/standalone-chrome:3.141.59-palladium'
The pipeline script is this one:
node {
    stage('Pull latest Docker repo') {
        git credentialsId: 'xxxx',
            url: 'git@bitbucket.org:my/test-app.git'
    }
    stage('Install app') {
        sh 'docker-compose down'
        sh 'docker-compose build --no-cache mysql'
        echo "Will deploy from ${branch}";
        echo "BuildApp Param ${buildapp}";
        if (params.buildapp == true) {
            sh 'docker-compose build --no-cache --build-arg BRANCH=${branch} apache'
        }
        sh 'docker-compose up -d'
    }
    stage('Set up Selenium') {
        try {
            sh 'docker rm selenium -f'
        }
        catch (exc) {
            echo 'No selenium container running'
        }
        sh 'docker pull selenium/standalone-chrome'
        sh 'docker run -d -p 4444:4444 --name=selenium --net=host -v /dev/shm:/dev/shm selenium/standalone-chrome:3.141.59-dubnium'
    }
    stage('Load tests') {
        dir("../") {}
        git credentialsId: 'xxxx',
            url: 'git@bitbucket.org:mytests.git'
    }
    stage('Run tests') {
        sh 'composer install'
        sh 'vendor/bin/codecept --debug run --steps tests/jenkins'
    }
}
Expected results: for the build to run. It ran over the weekend; on Saturday morning it was OK, passing all the tests, but on Sunday and Monday morning it failed with those errors, and since then I have been trying to figure out what is happening.
Here is the output in Jenkins:
Started by user
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/jobs/Autotest/workspace
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Pull latest Docker repo)
[Pipeline] git
using credential xxxx
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url git@bitbucket.org:my/test-app.git # timeout=10
Fetching upstream changes from git@bitbucket.org:my/test-app.git
> git --version # timeout=10
using GIT_SSH to set credentials Jenkins Portal Tests Repo
> git fetch --tags --progress git@bitbucket.org:mydocker.git +refs/heads/*:refs/remotes/origin/*
> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 20cdc207dc87e736983d6b28b5a81853c7c37f70 (refs/remotes/origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f 20cdc207dc87e736983d6b28b5a81853c7c37f70
> git branch -a -v --no-abbrev # timeout=10
> git branch -D master # timeout=10
> git checkout -b master 20cdc207dc87e736983d6b28b5a81853c7c37f70
Commit message: "Got rid of the WORKDIR"
> git rev-list --no-walk 20cdc207dc87e736983d6b28b5a81853c7c37f70 # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Install app)
[Pipeline] sh
+ docker-compose down
Stopping workspace_apache_1 ...
Stopping workspace_mysql_1 ...
Stopping workspace_apache_1 ... done
Stopping workspace_mysql_1 ... done
Removing workspace_apache_1 ...
Removing workspace_mysql_1 ...
Removing workspace_mysql_1 ... done
Removing workspace_apache_1 ... done
Removing network workspace_frontend
Removing network workspace_backend
[Pipeline] sh
+ docker-compose build --no-cache mysql
Building mysql
Step 1/2 : FROM mysql:5.7.24
---> ba7a93aae2a8
Step 2/2 : ADD mysql.cnf /etc/mysql/mysql.conf.d/mysql.cnf
---> ba3aa48cc1b5
Successfully built ba3aa48cc1b5
Successfully tagged workspace_mysql:latest
[Pipeline] echo
Will deploy from feature/jenkins-autotests
[Pipeline] echo
BuildApp Param false
[Pipeline] sh
+ docker-compose up -d
Creating network "workspace_frontend" with the default driver
Creating network "workspace_backend" with the default driver
Creating workspace_mysql_1 ...
Creating workspace_mysql_1 ... done
Creating workspace_apache_1 ...
Creating workspace_apache_1 ... done
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Set up Selenium)
[Pipeline] sh
+ docker rm selenium -f
selenium
[Pipeline] sh
+ docker pull selenium/standalone-chrome
Using default tag: latest
latest: Pulling from selenium/standalone-chrome
Digest: sha256:e10140556650edfd1000bba6c58053f4c97c0826ef785858ed1afa6f82284e23
Status: Image is up to date for selenium/standalone-chrome:latest
[Pipeline] sh
+ docker run -d -p 4444:4444 --name=selenium --net=host -v /dev/shm:/dev/shm selenium/standalone-chrome:3.141.59-dubnium
WARNING: Published ports are discarded when using host network mode
3f379a0da28bfcf7f7912a05d757b6115773292c1a426199325f75cfae97988d
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Load tests)
[Pipeline] dir
Running in /var/lib/jenkins/jobs/Autotest
[Pipeline] {
[Pipeline] }
[Pipeline] // dir
[Pipeline] git
using credential xxxx
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url git@bitbucket.org:my/test-app.git # timeout=10
Fetching upstream changes from git@bitbucket.org:app/portal-tests.git
> git --version # timeout=10
using GIT_SSH to set credentials Jenkins Portal Tests Repo
> git fetch --tags --progress git@bitbucket.org:app/portal-tests.git +refs/heads/*:refs/remotes/origin/*
> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 1b4abdc36d9ca1339770d92efc5fad172cca3bc7 (refs/remotes/origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f 1b4abdc36d9ca1339770d92efc5fad172cca3bc7
> git branch -a -v --no-abbrev # timeout=10
> git branch -D master # timeout=10
> git checkout -b master 1b4abdc36d9ca1339770d92efc5fad172cca3bc7
Commit message: "Commented out the test to not run anymore"
> git rev-list --no-walk 1b4abdc36d9ca1339770d92efc5fad172cca3bc7 # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Run tests)
[Pipeline] sh
+ composer install
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Nothing to install or update
Generating autoload files
[Pipeline] sh
+ vendor/bin/codecept --debug run --steps tests/jenkins
Codeception PHP Testing Framework v2.5.2
Powered by PHPUnit 6.5.13 by Sebastian Bergmann and contributors.
Running with seed:
Jenkins Tests (48) -----------------------------------------
Modules: WebDriver, \Helper\Acceptance
------------------------------------------------------------
test1: test1
Signature: test1
Test: tests/acceptance/test1.php
Scenario --
 WebDriver::debugWebDriverLogs method has been called when webDriver is not set
 WebDriver::_saveScreenshot method has been called when webDriver is not set
 WebDriver::_savePageSource method has been called when webDriver is not set
 Screenshot and page source were saved into '/var/lib/jenkins/jobs/Autotest/workspace/tests/_output/' dir
 ERROR
 Test tests/acceptance/test1.php
 [Facebook\WebDriver\Exception\UnknownServerException] Error forwarding the new session Empty pool of VM for setup Capabilities {browserName: chrome}
#1 /var/lib/jenkins/jobs/Autotest/workspace/vendor/facebook/webdriver/lib/Exception/WebDriverException.php:114
#2 /var/lib/jenkins/jobs/Autotest/workspace/vendor/facebook/webdriver/lib/Remote/HttpCommandExecutor.php:326
#3 /var/lib/jenkins/jobs/Autotest/workspace/vendor/facebook/webdriver/lib/Remote/RemoteWebDriver.php:126
#4 /var/lib/jenkins/jobs/Autotest/workspace/vendor/symfony/event-dispatcher/EventDispatcher.php:212
#5 /var/lib/jenkins/jobs/Autotest/workspace/vendor/symfony/event-dispatcher/EventDispatcher.php:44
I think I sorted it out. I ran:
sudo netstat -tulpn | grep :4444
That is the port I am using, and the result was:
tcp6 0 0 120.0.1.1:4444 :::* LISTEN 1739/java
I killed the process with:
sudo kill 1739
And now the tests are running again.
Thank you for the help.
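If the stale process shows up again, the cleanup can be baked into the pipeline. A sketch of the 'Set up Selenium' stage with an assumed port-freeing step (fuser comes from the psmisc package and is not part of the original script):

stage('Set up Selenium') {
    // assumed cleanup: free host port 4444 if a leftover process still holds it
    sh 'fuser -k 4444/tcp || true'
    sh 'docker rm selenium -f || true'
    sh 'docker pull selenium/standalone-chrome'
    sh 'docker run -d -p 4444:4444 --name=selenium --net=host -v /dev/shm:/dev/shm selenium/standalone-chrome:3.141.59-dubnium'
}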
I think it has something to do with this part:
+ docker run -d -p 4444:4444 --name=selenium --net=host -v /dev/shm:/dev/shm selenium/standalone-chrome:3.141.59-dubnium
WARNING: Published ports are discarded when using host network mode
But the funny thing is that I haven't touched it since the last passing build.
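That warning fits the diagnosis above: with --net=host, the -p 4444:4444 mapping is discarded and the container binds host port 4444 directly, so any leftover java process on that port silently wins. One hedged alternative is to drop host networking so the port mapping becomes explicit, though the container would then need another route to the application under test (for example a shared Docker network), which is an assumption about this setup:

docker run -d -p 4444:4444 --name=selenium \
    -v /dev/shm:/dev/shm selenium/standalone-chrome:3.141.59-dubnium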

Webdriver instances not created for custom protractor.conf file

I want to integrate my E2E suite into Travis, so I followed this article. As mentioned in the article, I created a custom protractor.ci.conf.js file for the Travis build and placed it inside my e2e folder (path: e2e/protractor.ci.conf.js).
The only difference between my custom e2e/protractor.ci.conf.js and the Angular-generated protractor.conf.js is the value of the args property shown below.
e2e/protractor.ci.conf.js
chromeOptions: {
  args: [
    '--headless',
    'window-size=1920,1080'
  ]
}
protractor.conf.js
const SpecReporter = require('jasmine-spec-reporter').SpecReporter;

exports.config = {
  allScriptsTimeout: 11000,
  specs: [
    './e2e/**/*.e2e-spec.ts'
  ],
  capabilities: {
    shardTestFiles: true,
    maxInstances: 2,
    'browserName': 'chrome',
    chromeOptions: {
      args: ['--start-maximized']
    }
  },
  directConnect: true,
  baseUrl: 'localhost:4000/',
  framework: 'jasmine',
  jasmineNodeOpts: {
    showColors: true,
    defaultTimeoutInterval: 300000,
    print: function () {
    }
  },
  useAllAngular2AppRoots: true,
  onPrepare: function () {
    jasmine.getEnv().addReporter(new SpecReporter());
    require('ts-node').register({
      project: 'e2e/tsconfig.json'
    });
  }
};
In my package.json file there are two scripts: one for running the tests locally and one on Travis.
package.json (at the same level as protractor.conf.js):
"scripts": {
...
"test": "ng test --watch=false",
"pree2e": "webdriver-manager update",
"e2e": "concurrently --kill-others \"ng e2e --port=4000\" \"npm run _server:run\"",
"e2e:ci": "concurrently --kill-others \"ng e2e --port=4000 --protractor-config=e2e/protractor.ci.conf.js\" \"npm run _server:run\"",
"_server:run": "tsc -p ./server && concurrently \"tsc -w -p ./server\" \"nodemon dist/server/index.js\" ",
...
},
.travis.yml
branches:
  only:
    - staging
    - prod
    - functional-testing
script:
  ...
  - if [[ $TRAVIS_COMMIT_MESSAGE == *"[skip e2e]"* ]]; then echo "skipping E2E test"; else npm run e2e:ci; fi
  ...
before_deploy:
  - sed -i '/dist/d' .gitignore
  - git add . && git commit -m "latest build"
  - cd $TRAVIS_BUILD_DIR/dist
PROBLEM
When simply running npm run e2e, every test works fine. But when I use the npm run e2e:ci command, the script hangs and no instance of WebDriver runs:
I/launcher — Running 0 instances of WebDriver
appears instead of 1 or 2 instances.
That's because you made a new config file and placed it in the /e2e folder instead of the default root folder.
Protractor resolves spec paths relative to the config file, so the path to the test files in your case should also be updated:
'./e2e/**/*.e2e-spec.ts' becomes './**/*.e2e-spec.ts'
Since the runner currently cannot find any of the files specified, it doesn't start any WebDriver instances.
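A sketch of the relevant part of e2e/protractor.ci.conf.js after that change; everything other than specs and the headless args is assumed to mirror the generated config:

// e2e/protractor.ci.conf.js - spec paths resolve relative to this file
exports.config = {
  // ...same settings as protractor.conf.js...
  specs: [
    './**/*.e2e-spec.ts'   // was './e2e/**/*.e2e-spec.ts' in the root-level config
  ],
  capabilities: {
    shardTestFiles: true,
    maxInstances: 2,
    browserName: 'chrome',
    chromeOptions: {
      args: ['--headless', 'window-size=1920,1080']
    }
  }
};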

Selenium 'Chrome failed to start: exited abnormally' error

I am following https://github.com/RobCherry/docker-chromedriver/blob/master/Dockerfile as an example and I have the following in my Dockerfile:
RUN CHROMEDRIVER_VERSION=`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE` && \
    mkdir -p /opt/chromedriver-$CHROMEDRIVER_VERSION && \
    curl -sS -o /tmp/chromedriver_linux64.zip http://chromedriver.storage.googleapis.com/$CHROMEDRIVER_VERSION/chromedriver_linux64.zip && \
    unzip -qq /tmp/chromedriver_linux64.zip -d /opt/chromedriver-$CHROMEDRIVER_VERSION && \
    rm /tmp/chromedriver_linux64.zip && \
    chmod +x /opt/chromedriver-$CHROMEDRIVER_VERSION/chromedriver && \
    ln -fs /opt/chromedriver-$CHROMEDRIVER_VERSION/chromedriver /usr/local/bin/chromedriver

RUN curl -sS -o - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \
    echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list && \
    apt-get -yqq update && \
    apt-get -yqq install google-chrome-stable && \
    rm -rf /var/lib/apt/lists/*

ENV DISPLAY :20.0
ENV SCREEN_GEOMETRY "1440x900x24"
ENV CHROMEDRIVER_PORT 4444
ENV CHROMEDRIVER_WHITELISTED_IPS "127.0.0.1"
ENV CHROMEDRIVER_URL_BASE ''
EXPOSE 4444
To create the driver I am doing:
webdriver.Chrome()
But I get:
selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally
(Driver info: chromedriver=2.33.506092 (733a02544d189eeb751fe0d7ddca79a0ee28cce4),platform=Linux 4.4.27-boot2docker x86_64)
Do I have to do anything else to allow Chrome to start?
Got it working. The key is to add the following flags:
from selenium import webdriver

def make_driver():
    options = webdriver.ChromeOptions()
    options.add_argument('--disable-extensions')
    options.add_argument('--headless')    # no X server needed
    options.add_argument('--disable-gpu')
    options.add_argument('--no-sandbox')  # required when Chrome runs as root in a container
    return webdriver.Chrome(chrome_options=options)  # Selenium 4+ renamed this to options=
I got it working just by adding:
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
ChromeOptions chromeOptions = new ChromeOptions();
chromeOptions.addArguments("--headless"); // headless: no display needed in the container
driver = new ChromeDriver(chromeOptions);