When I write a script like this in a simple Jenkins declarative pipeline:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo "${env.HOMEPATH}"
            }
        }
    }
}
I receive this output:
Started by user USER_1
Replayed #36
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in C:\Users\USER_1\AppData\Local\Jenkins\.jenkins\workspace\2nd_pipeline
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Build)
[Pipeline] echo
\Users\USER_1
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
However, when I write only this in the pipeline script:
echo "${env.HOMEPATH}"
I receive only this:
Started by user USER_1
Replayed #37
[Pipeline] Start of Pipeline
[Pipeline] echo
null
[Pipeline] End of Pipeline
Finished: SUCCESS
My question is: why, when I write only a single line in the pipeline script to print env.HOMEPATH, do I receive null, but when I use the pipeline block the variable is printed correctly? HOMEPATH is a variable defined on my only node (master), so it has also been injected into the Jenkins environment variables.
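For comparison, a scripted one-liner only picks up node-level environment variables once an executor is allocated. A minimal scripted sketch (assuming the same built-in node as above):
node {
    // Node-level environment variables such as HOMEPATH are only injected once an
    // executor is allocated; outside node {} the step runs without that context.
    echo "${env.HOMEPATH}"
}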
I have the following Jenkins Pipeline:
node('api50analysis') {
    def err = null
    currentBuild.result = "SUCCESS"
    try {
        dir('/var/www/TMS/tests') {
            stage 'Delete old report'
            sh 'rm -Rf ./allure-report'
            sh 'rm -Rf ./allure-results'
            stage 'Get AutoTest from git'
            git credentialsId: '9d6d49c0-9e6c-4e81-8893-cb5993b6fd83', url: '$git_url_test', branch: '$branch_test'
            stage 'Install dependencies'
            sh 'sudo chown -R $(whoami) /var/www/TMS/tests'
            sh 'npm install'
            stage 'WEB tests as admin'
            sh 'npm run clearDB'
            sh 'npx wdio run wdio.conf.ts --suite adminSuites'
            sh 'npm run clearDB'
            sh 'npx wdio run wdio.conf.ts --suite aidLimit'
            sh 'npm run clearDB'
            sh 'npx wdio run wdio.conf.ts --suite limit'
            sh 'npm run clearDB'
            sh 'npx wdio run wdio.conf.ts --suite configuration'
            sh 'npm run clearDB'
            sh 'npx wdio run wdio.conf.ts --suite merchant'
            sh 'npx wdio run wdio.conf.ts --suite terminal'
            sh 'npx wdio run wdio.conf.ts --suite deviceType'
            sh 'npm run clearDB'
            sh 'npx wdio run wdio.conf.ts --suite device'
            sh 'npm run clearDB'
        }
    }
    catch (caughtError) {
        err = caughtError
        currentBuild.result = "FAILURE"
        build job: 'telegram_job',
            parameters: [string(name: 'msg',
                value: "----------\n" +
                       "Job name: ${JOB_NAME}\n" +
                       "Status: $currentBuild.result\n" +
                       "Build number: ${BUILD_NUMBER}\n" +
                       "Build URL ${BUILD_URL}\n" +
                       "----------")]
    } finally {
        stage 'Allure'
        script {
            allure([
                includeProperties: false,
                jdk: '',
                properties: [],
                reportBuildPolicy: 'ALWAYS',
                results: [[path: 'target/allure-results']]
            ])
        }
        if (err) {
            throw err
        }
    }
}
The Allure report is always generated; however, if at least one test has failed, it produces an empty report instead of one showing which tests passed, which tests failed, and the screenshot attached at the point where the test failed (configured in the WebdriverIO config).
What could be the problem? The server where the autotests run is on CentOS 7. Here is an example of a report when all tests pass.
In the console in this case:
[Pipeline] stage (Allure)
Using the ‘stage’ step without a block argument is deprecated
Entering stage Allure
Proceeding
[Pipeline] script
[Pipeline] {
[Pipeline] allure
[TMS_AUTOTEST] $ /opt/jenkins/tools/ru.yandex.qatools.allure.jenkins.tools.AllureCommandlineInstallation/Allure/bin/allure generate -c -o /opt/jenkins/workspace/TMS_AUTOTEST/allure-report
allure-results does not exist
Report successfully generated to /opt/jenkins/workspace/TMS_AUTOTEST/allure-report
Allure report was successfully generated.
Creating artifact for the build.
Artifact was added to the build.
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 3
Finished: FAILURE
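For reference on the "allure-results does not exist" line: the allure step here runs outside the dir('/var/www/TMS/tests') block, so it looks for its results relative to the job workspace (the log shows /opt/jenkins/workspace/TMS_AUTOTEST), while the suites write their results under /var/www/TMS/tests. A minimal sketch of the Allure part of the finally block that copies the results into the workspace first (the source path is an assumption about where the WebdriverIO Allure reporter writes):
stage 'Allure'
// copy the generated results into the workspace so the allure step can find them;
// '|| true' keeps the step from failing when the directory is missing
sh 'cp -r /var/www/TMS/tests/allure-results ./allure-results || true'
allure([
    includeProperties: false,
    jdk: '',
    properties: [],
    reportBuildPolicy: 'ALWAYS',
    results: [[path: 'allure-results']]
])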
I am running Jenkins on localhost:8080 and have just started working with pipeline jobs. I created a basic Jenkinsfile for running Selenium with Cucumber tests and created a declarative pipeline job in Jenkins on macOS.
My jenkinsfile looks like this:
pipeline {
agent any
stages {
stage ('Compile Stage') {
steps {
withMaven(maven : 'maven_3_6_3') {
sh 'mvn clean install'
}
}
}
stage ('Testing Stage') {
steps {
withMaven(maven : 'maven_3_6_3') {
sh 'mvn test'
}
}
}
stage ('Cucumber Reports') {
steps {
cucumber buildStatus: "UNSTABLE",
fileIncludePattern: "**/cucumber.json",
jsonReportDirectory: 'target'
}
}
}
}
When the job builds in Jenkins, I get this error:
ERROR: Could not find specified Maven installation 'maven_3_6_3'.
Finished: FAILURE
This is the full log:
Started by user unknown or anonymous
Obtained XeDemo/jenkinsfile from git https://myrepo@bitbucket.org/myrepo/repo.git
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /Users/jo/.jenkins/workspace/DeclarativePipelineDemo
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Checkout SCM)
[Pipeline] checkout
The recommended git tool is: git
using credential e2105da7-be79-42ae-a46d-86a636071021
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url https://myrepo@bitbucket.org/myrepo/repo.git # timeout=10
Fetching upstream changes from https://myrepo@bitbucket.org/myrepo/repo.git
> git --version # timeout=10
> git --version # 'git version 2.24.3 (Apple Git-128)'
using GIT_ASKPASS to set credentials
> git fetch --tags --force --progress -- https://myrepo@bitbucket.org/myrepo/repo.git +refs/heads/*:refs/remotes/origin/* # timeout=10
> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision f2c0bef8083d26f1cjo07197940b7d58ce4bdd3 (refs/remotes/origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f f2c0bef8083d26f1cjo07197940b7d58ce4bdd3 # timeout=10
Commit message: "Reformat code"
> git rev-list --no-walk f2c0bef8083d26f1cjo07197940b7d58ce4bdd7 # timeout=10
The recommended git tool is: git
using credential e2105da7-be79-42ae-a46d-86a636071021
> git rev-parse HEAD^{commit} # timeout=10
The recommended git tool is: git
using credential e2105da7-be79-42ae-a46d-86a636071021
[GitCheckoutListener] Recording commits of 'git https://myrepo@bitbucket.org/myrepo/repo.git'
[GitCheckoutListener] Found previous build 'DeclarativePipelineDemo #19' that contains recorded Git commits
[GitCheckoutListener] -> Starting recording of new commits since 'f2c0bef8083d26f1cjo07197940b7d58ce4bdd3'
[GitCheckoutListener] -> Git commit decorator successfully obtained 'hudson.plugins.git.browser.BitbucketWeb@6f02a6f3' to render commit links
[GitCheckoutListener] -> No new commits found
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Compile Stage)
[Pipeline] withMaven
[withMaven] Options: []
[withMaven] Available options:
[withMaven] using JDK installation provided by the build agent
[Pipeline] // withMaven
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Testing Stage)
Stage "Testing Stage" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Cucumber Reports)
Stage "Cucumber Reports" skipped due to earlier failure(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: Could not find specified Maven installation 'maven_3_6_3'.
Finished: FAILURE
To try to rectify the issue, I have provided the Maven, Java and Git paths in the Global Tool Configuration in the Jenkins settings, so I don't know why the specified Maven installation still cannot be found. I am the only user on the Jenkins app. What am I doing wrong?
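One thing worth double-checking (sketched below under the assumption that a Maven installation named exactly maven_3_6_3 exists in Manage Jenkins > Global Tool Configuration): the name passed to withMaven, or declared in a tools block, has to match that configured name character for character.
pipeline {
    agent any
    tools {
        // must match the Maven installation name in Global Tool Configuration exactly
        maven 'maven_3_6_3'
    }
    stages {
        stage('Compile Stage') {
            steps {
                sh 'mvn -v'            // quick check that the declared Maven is resolved
                sh 'mvn clean install'
            }
        }
    }
}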
So I am running my tests using Jenkins; the pipeline script does everything for me, but the build is failing at the point where it says:
WebDriver::debugWebDriverLogs method has been called when webDriver is not set for each test, and at the end of the log it also says: [Facebook\WebDriver\Exception\UnknownServerException] Error forwarding the new session Empty pool of VM for setup Capabilities {browserName: chrome}
I have tried changing the part just after I pull:
sh 'docker pull selenium/standalone-chrome'
from this:
'docker run -d -p 4444:4444 --name=selenium --net=host -v /dev/shm:/dev/shm selenium/standalone-chrome:3.141.59-dubnium'
to:
'docker run -d -p 4444:4444 --name=selenium --net=host -v /dev/shm:/dev/shm selenium/standalone-chrome:3.141.59-palladium'
The pipeline script is this one:
node {
stage('Pull latest Docker repo') {
git credentialsId: 'xxxx',
            url: 'git@bitbucket.org:my/test-app.git'
}
stage('Install app') {
sh 'docker-compose down'
sh 'docker-compose build --no-cache mysql'
echo "Will deploy from ${branch}";
echo "BuildApp Param ${buildapp}";
if (params.buildapp == true){
sh 'docker-compose build --no-cache --build-arg BRANCH=${branch} apache'
}
sh 'docker-compose up -d'
}
stage('Set up Selenium') {
try {
sh 'docker rm selenium -f'
}
catch(exc) {
echo 'No selenium container running'
}
sh 'docker pull selenium/standalone-chrome'
sh 'docker run -d -p 4444:4444 --name=selenium --net=host -v /dev/shm:/dev/shm selenium/standalone-chrome:3.141.59-dubnium'
}
stage('Load tests') {
dir("../") {}
git credentialsId: 'xxxx',
            url: 'git@bitbucket.org:mytests.git'
}
stage('Run tests') {
sh 'composer install'
sh 'vendor/bin/codecept --debug run --steps tests/jenkins'
}
}
Expected results: for the build to run. It was running over the weekend, and on Saturday morning it was OK, passing all the tests; on Sunday and Monday mornings it failed with these errors, and since then I have been trying to figure out what is happening.
Here is the output in Jenkins:
Started by user
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/jobs/Autotest/workspace
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Pull latest Docker repo)
[Pipeline] git
using credential xxxx
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url git@my/test-app.git # timeout=10
Fetching upstream changes from git@bitbucket.org:my/test-app.git
> git --version # timeout=10
using GIT_SSH to set credentials Jenkins Portal Tests Repo
> git fetch --tags --progress git@bitbucket.org:mydocker.git +refs/heads/*:refs/remotes/origin/*
> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 20cdc207dc87e736983d6b28b5a81853c7c37f70 (refs/remotes/origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f 20cdc207dc87e736983d6b28b5a81853c7c37f70
> git branch -a -v --no-abbrev # timeout=10
> git branch -D master # timeout=10
> git checkout -b master 20cdc207dc87e736983d6b28b5a81853c7c37f70
Commit message: "Got rid of the WORKDIR"
> git rev-list --no-walk 20cdc207dc87e736983d6b28b5a81853c7c37f70 # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Install app)
[Pipeline] sh
+ docker-compose down
Stopping workspace_apache_1 ... done
Stopping workspace_mysql_1 ... done
Removing workspace_apache_1 ... done
Removing workspace_mysql_1 ... done
Removing network workspace_frontend
Removing network workspace_backend
[Pipeline] sh
+ docker-compose build --no-cache mysql
Building mysql
Step 1/2 : FROM mysql:5.7.24
---> ba7a93aae2a8
Step 2/2 : ADD mysql.cnf /etc/mysql/mysql.conf.d/mysql.cnf
---> ba3aa48cc1b5
Successfully built ba3aa48cc1b5
Successfully tagged workspace_mysql:latest
[Pipeline] echo
Will deploy from feature/jenkins-autotests
[Pipeline] echo
BuildApp Param false
[Pipeline] sh
+ docker-compose up -d
Creating network "workspace_frontend" with the default driver
Creating network "workspace_backend" with the default driver
Creating workspace_mysql_1 ... done
Creating workspace_apache_1 ... done
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Set up Selenium)
[Pipeline] sh
+ docker rm selenium -f
selenium
[Pipeline] sh
+ docker pull selenium/standalone-chrome
Using default tag: latest
latest: Pulling from selenium/standalone-chrome
Digest: sha256:e10140556650edfd1000bba6c58053f4c97c0826ef785858ed1afa6f82284e23
Status: Image is up to date for selenium/standalone-chrome:latest
[Pipeline] sh
+ docker run -d -p 4444:4444 --name=selenium --net=host -v /dev/shm:/dev/shm selenium/standalone-chrome:3.141.59-dubnium
WARNING: Published ports are discarded when using host network mode
3f379a0da28bfcf7f7912a05d757b6115773292c1a426199325f75cfae97988d
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Load tests)
[Pipeline] dir
Running in /var/lib/jenkins/jobs/Autotest
[Pipeline] {
[Pipeline] }
[Pipeline] // dir
[Pipeline] git
using credential xxxx
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url git@bitbucket.org:my/test-app.git # timeout=10
Fetching upstream changes from git@bitbucket.org:app/portal-tests.git
> git --version # timeout=10
using GIT_SSH to set credentials Jenkins Portal Tests Repo
> git fetch --tags --progress git@bitbucket.org:app/portal-tests.git +refs/heads/*:refs/remotes/origin/*
> git rev-parse refs/remotes/origin/master^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 1b4abdc36d9ca1339770d92efc5fad172cca3bc7 (refs/remotes/origin/master)
> git config core.sparsecheckout # timeout=10
> git checkout -f 1b4abdc36d9ca1339770d92efc5fad172cca3bc7
> git branch -a -v --no-abbrev # timeout=10
> git branch -D master # timeout=10
> git checkout -b master 1b4abdc36d9ca1339770d92efc5fad172cca3bc7
Commit message: "Commented out the test to not run anymore"
> git rev-list --no-walk 1b4abdc36d9ca1339770d92efc5fad172cca3bc7 # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Run tests)
[Pipeline] sh
+ composer install
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Nothing to install or update
Generating autoload files
[Pipeline] sh
+ vendor/bin/codecept --debug run --steps tests/jenkins
Codeception PHP Testing Framework v2.5.2
Powered by PHPUnit 6.5.13 by Sebastian Bergmann and contributors.
Running with seed:
Jenkins Tests (48) -----------------------------------------
Modules: WebDriver, \Helper\Acceptance
------------------------------------------------------------
test1: test1
Signature: test1
Test: tests/acceptance/test1.php
Scenario --
 WebDriver::debugWebDriverLogs method has been called when webDriver is not set
 WebDriver::_saveScreenshot method has been called when webDriver is not set
 WebDriver::_savePageSource method has been called when webDriver is not set
 Screenshot and page source were saved into '/var/lib/jenkins/jobs/Autotest/workspace/tests/_output/' dir
 ERROR
 Test tests/acceptance/test1.php
 [Facebook\WebDriver\Exception\UnknownServerException] Error forwarding the new session Empty pool of VM for setup Capabilities {browserName: chrome}
#1 /var/lib/jenkins/jobs/Autotest/workspace/vendor/facebook/webdriver/lib/Exception/WebDriverException.php:114
#2 /var/lib/jenkins/jobs/Autotest/workspace/vendor/facebook/webdriver/lib/Remote/HttpCommandExecutor.php:326
#3 /var/lib/jenkins/jobs/Autotest/workspace/vendor/facebook/webdriver/lib/Remote/RemoteWebDriver.php:126
#4 /var/lib/jenkins/jobs/Autotest/workspace/vendor/symfony/event-dispatcher/EventDispatcher.php:212
#5 /var/lib/jenkins/jobs/Autotest/workspace/vendor/symfony/event-dispatcher/EventDispatcher.php:44
I think I sorted it out:
sudo netstat -tulpn | grep :4444
That is the port I am using, and the result was:
tcp6 0 0 120.0.1.1:4444 :::* LISTEN 1739/java
I killed the process by doing:
sudo kill 1739
And now tests are running again.
Thank you for the help.
I think it has something to do with this part:
+ docker run -d -p 4444:4444 --name=selenium --net=host -v /dev/shm:/dev/shm selenium/standalone-chrome:3.141.59-dubnium
WARNING: Published ports are discarded when using host network mode
But the funny thing is that I haven't touched it since the last passing build.
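Putting the port clash and the host-network warning together, a more defensive version of the 'Set up Selenium' stage might look like this sketch (the fuser call assumes the psmisc tools are installed on the agent):
stage('Set up Selenium') {
    // remove any leftover container; '|| true' keeps the step from failing when none exists
    sh 'docker rm -f selenium || true'
    // free port 4444 in case an orphaned selenium/java process still holds it
    sh 'sudo fuser -k 4444/tcp || true'
    // pin the tag so a 'latest' pull cannot silently change the image
    sh 'docker pull selenium/standalone-chrome:3.141.59-dubnium'
    // with --net=host the -p mapping is ignored anyway, so it is dropped here
    sh 'docker run -d --name=selenium --net=host -v /dev/shm:/dev/shm selenium/standalone-chrome:3.141.59-dubnium'
}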
I have a Jenkins Declarative Pipeline with two separate Docker containers.
I need to use the GitHub commit message and other git data in the post section.
Because there are two different agents for the two Docker containers, I have agent none at the top of the pipeline.
I use a trick to get the GitHub commit message; I don't know a simpler way to get it.
environment {
COMMIT_TEXT = sh (
script: 'git log --format="full" -1 ${GIT_COMMIT}',
returnStdout: true
).trim()
TextComment = 'Text from Variable'
}
I can't use an environment block with sh at the top level with agent none, so I need to pass variables to the post section another way.
Any ideas?
Thanks.
pipeline {
agent none
/* --- I cannot use "sh" here because it cannot be executed with 'agent none'
environment {
COMMIT_TEXT = sh (
script: 'git log --format="full" -1 ${GIT_COMMIT}',
returnStdout: true
).trim()
TextComment = 'Text from Variable'
}
*/
stages {
stage('build container up') {
agent {
docker {
image 'container:local'
}
/* --- there is no reason to put the variable here, because it will be gone with the container before post processing.
environment {
COMMIT_TEXT = sh (
script: 'git log --format="full" -1 ${GIT_COMMIT}',
returnStdout: true
).trim()
TextComment = 'Text from Variable'
}
*/
}
stages {
stage('Build') {
steps{
git branch: 'testing-jenkinsfile', url: 'https://github.com/...git'
}
}
}
}
stage('Build image') {
steps {
script {
docker.build("walletapi:local")
}
}
}
stage('Run service container'){
agent {label 'master'}
steps {
sh 'docker run -it -d --name container01 \
container01:local'
}
}
    }
post {
always {
script {
sh 'env'
}
}
    }
}
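One common workaround, sketched below with the image name and git URL taken from the pipeline above: declare a plain Groovy variable outside the pipeline block, fill it in a script step while the Docker agent is still alive, and read it in post, where echo needs no agent.
def commitText = ''

pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker { image 'container:local' }
            }
            steps {
                git branch: 'testing-jenkinsfile', url: 'https://github.com/...git'
                script {
                    // captured while the container agent is still running
                    commitText = sh(
                        script: 'git log --format="full" -1',
                        returnStdout: true
                    ).trim()
                }
            }
        }
    }
    post {
        always {
            echo "Commit: ${commitText}" // echo runs without an agent, so 'agent none' is fine here
        }
    }
}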
Initial disclaimer - I'm very new to Jenkins, so I don't really understand much about it yet. Baby steps would be very much appreciated.
I'm in the process of trying to set up a Jenkins job to run through a series of end-to-end tests I've written for my web app in a headless version of Chrome, as I have witnessed and read that PhantomJS is rather unreliable. I can get them to run absolutely fine locally on my machine, but when I try to run them on Jenkins it comes back with the output below:
The error log
[13:25:24] I/file_manager - creating folder /var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/node_modules/webdriver-manager/selenium
[13:25:24] I/update - chromedriver: unzipping chromedriver_2.30.zip
[13:25:24] I/update - chromedriver: setting permissions to 0755 for /var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/node_modules/webdriver-manager/selenium/chromedriver_2.30
[13:25:24] I/launcher - Running 1 instances of WebDriver
[13:25:24] I/direct - Using ChromeDriver directly...
[13:25:25] E/launcher - Server terminated early with status 127
[13:25:25] E/launcher - Error: Server terminated early with status 127
at earlyTermination.catch.e (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/selenium-webdriver/remote/index.js:252:52)
at process._tickCallback (internal/process/next_tick.js:103:7)
From: Task: WebDriver.createSession()
at Function.createSession (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/selenium-webdriver/lib/webdriver.js:777:24)
at Function.createSession (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/selenium-webdriver/chrome.js:709:29)
at Direct.getNewDriver (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/lib/driverProviders/direct.ts:90:25)
at Runner.createBrowser (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/lib/runner.ts:225:39)
at q.then.then (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/lib/runner.ts:391:27)
at _fulfilled (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/node_modules/q/q.js:834:54)
at self.promiseDispatch.done (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/node_modules/q/q.js:863:30)
at Promise.promise.promiseDispatch (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/node_modules/q/q.js:796:13)
at /var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/node_modules/q/q.js:556:49
at runSingle (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/node_modules/q/q.js:137:13)
at flush (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/node_modules/q/q.js:125:13)
at _combinedTickCallback (internal/process/next_tick.js:67:7)
at process._tickCallback (internal/process/next_tick.js:98:9)
[13:25:25] E/launcher - Process exited with error code 199
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
[Bitbucket] Notifying commit build result
[Bitbucket] Build result notified
ERROR: script returned exit code 199
Finished: FAILURE
I've been stumbling around in the dark with these for a day or so now and have tried almost 100 different combinations of what the internet says should fix it, but to no avail.
My Jenkinsfile
properties([[$class: 'jenkins.model.BuildDiscarderProperty',
strategy: [$class: 'LogRotator', numToKeepStr: '30', artifactNumToKeepStr: '1']]])
node('web-app-build') {
try {
stage('Clean workspace') {
deleteDir()
}
stage('Install NodeJS and NPM') {
def nodeHome = tool name: 'NodeJS 7.2.0', type: 'jenkins.plugins.nodejs.tools.NodeJSInstallation'
env.PATH = "${nodeHome}/bin:${env.PATH}"
}
stage('Checkout') {
checkout scm
}
stage('Install Dependencies') {
sh "npm install --silent"
sh "npm install -g #angular/cli#latest"
sh "ng set --global warnings.versionMismatch=false"
sh "npm install protractor -g"
}
stage('Run E2E tests') {
sh "webdriver-manager update --versions.chrome=2.30 --gecko=false"
sh "ng e2e"
}
stage('Publish Reports') {
publishHTML([allowMissing: false, alwaysLinkToLastBuild: false, keepAll: false, reportDir: 'coverage', reportFiles: 'index.html', reportName: 'HTML Report'])
step([$class: 'JUnitResultArchiver', testResults: '**/junit/junit.xml'])
step([
$class: 'CloverPublisher',
cloverReportDir: 'coverage',
cloverReportFileName: 'clover.xml',
healthyTarget: [methodCoverage: 70, conditionalCoverage: 80, statementCoverage: 80],
unhealthyTarget: [methodCoverage: 50, conditionalCoverage: 50, statementCoverage: 50],
failingTarget: [methodCoverage: 0, conditionalCoverage: 0, statementCoverage: 0]
])
}
if (env.BRANCH_NAME == 'develop') {
stage('Create env=staging build ready for S3') {
sh "ng build --env=staging --output-hashing=all"
}
stage('Deploy Build to S3 -----------------') {
env.AWS_ACCESS_KEY_ID = '------------------------'
env.AWS_SECRET_ACCESS_KEY = '------------------------'
sh "npm install s3-deploy -g"
sh "s3-deploy 'dist/**' --cwd './dist' --region 'us-west-2' --bucket '-----------------' --cache 60 --etag"
}
stage('Create env=ci build ready for S3') {
sh "ng build --env=ci --output-hashing=all"
}
stage('Deploy Build to S3 ---------------') {
env.AWS_ACCESS_KEY_ID = '--------------------'
env.AWS_SECRET_ACCESS_KEY = '---------------------'
sh "npm install s3-deploy -g"
sh "s3-deploy 'dist/**' --cwd './dist' --region 'us-west-2' --bucket '-----------------------' --cache 60 --etag"
}
stage('Create env=e2e build ready for S3 ------------------') {
sh "ng build --env=e2e --output-hashing=all"
}
stage('Deploy Build to S3 ------------------') {
env.AWS_ACCESS_KEY_ID = '-----------------------'
env.AWS_SECRET_ACCESS_KEY = '--------------------------'
sh "npm install s3-deploy -g"
sh "s3-deploy 'dist/**' --cwd './dist' --region 'us-west-2' --bucket '-------------------------' --cache 60 --etag"
}
stage('Run "Web App e2e" Tests') {
build job: '../Web App e2e', wait: false
}
}
} catch (e) {
throw e
}
}
protractor.conf.js
/*global jasmine */
var SpecReporter = require('jasmine-spec-reporter').SpecReporter;
exports.config = {
allScriptsTimeout: 11000,
specs: [
'./e2e/**/*.e2e-spec.ts'
],
capabilities: {
'browserName': 'chrome',
'chromeOptions': {
            args: ['--headless', '--no-sandbox', '--disable-gpu', '--window-size=800x600']
}
},
directConnect: true,
framework: 'jasmine',
jasmineNodeOpts: {
showColors: true,
defaultTimeoutInterval: 30000,
print: function () {
}
},
useAllAngular2AppRoots: true,
beforeLaunch: function () {
require('ts-node').register({
project: 'e2e'
});
},
onPrepare: function () {
jasmine.getEnv().addReporter(new SpecReporter());
}
};
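Given that an exit status of 127 on Linux generally means a binary (or one of its shared libraries) could not be found, one hedged way to narrow this down is to verify from the pipeline that a Chrome binary actually exists on the Jenkins agent before ng e2e runs. A sketch, where the binary names are assumptions about what might be installed:
stage('Run E2E tests') {
    // fail fast with a readable message if no Chrome/Chromium is present on this agent
    sh '''
        command -v google-chrome || command -v google-chrome-stable || command -v chromium-browser || {
            echo "No Chrome/Chromium binary found on this agent"; exit 1;
        }
    '''
    sh "webdriver-manager update --versions.chrome=2.30 --gecko=false"
    sh "ng e2e"
}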
If someone could explain why this is happening, and how to stop it from doing so, that would be much appreciated.
Thanks in advance!
EDIT
This is the output when I don't connect directly
[15:03:50] I/file_manager - creating folder /var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/node_modules/webdriver-manager/selenium
[15:03:51] I/update - chromedriver: unzipping chromedriver_2.30.zip
[15:03:51] I/update - chromedriver: setting permissions to 0755 for /var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/node_modules/webdriver-manager/selenium/chromedriver_2.30
[15:03:51] I/launcher - Running 1 instances of WebDriver
[15:03:51] E/local - Error code: 135
[15:03:51] E/local - Error message: No update-config.json found. Run 'webdriver-manager update' to download binaries.
[15:03:51] E/local - Error: No update-config.json found. Run 'webdriver-manager update' to download binaries.
at Local.addDefaultBinaryLocs_ (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/lib/driverProviders/local.ts:47:15)
at Local.setupDriverEnv (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/lib/driverProviders/local.ts:98:10)
at Local.setupEnv (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/lib/driverProviders/driverProvider.ts:124:30)
at q.then (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/lib/runner.ts:387:39)
at _fulfilled (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/node_modules/q/q.js:834:54)
at self.promiseDispatch.done (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/node_modules/q/q.js:863:30)
at Promise.promise.promiseDispatch (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/node_modules/q/q.js:796:13)
at /var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/node_modules/q/q.js:857:14
at runSingle (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/node_modules/q/q.js:137:13)
at flush (/var/jenkins/workspace/Web_App_feature_e2e-chrome-YUH5PYKKXHHXSQT3HLIZ74DQIBAH3D5D6WZCOLDVI2LYG4RGOVBQ/node_modules/protractor/node_modules/q/q.js:125:13)
at _combinedTickCallback (internal/process/next_tick.js:67:7)
at process._tickCallback (internal/process/next_tick.js:98:9)
[15:03:51] E/launcher - Process exited with error code 135
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
[Bitbucket] Notifying commit build result
[Bitbucket] Build result notified
ERROR: script returned exit code 135
Finished: FAILURE