PhantomJS timeout issue when running in headless mode in GitLab CI

I am trying to use GitLab CI to run some client-side unit tests written with QUnit. To run the QUnit tests I am using the grunt-contrib-qunit plugin, which hosts them on a localhost server and runs them headlessly in a console. Locally I can run all the unit tests successfully, but when I check in my code, which kicks off the CI process on GitLab, it fails to start the PhantomJS server and gives a timeout error. I am also providing JSBin links to two text files that capture the unit-test output from my console: one from my local system, and one from the GitLab CI run that starts when I check in my code.
Local Console Output File Dump
Gitlab CI Output Dump
Adding my .gitlab-ci.yml file:
image: node:4.2.2

before_script:
  - dir
  - cd sapui5_repo
  - dir
  - cd app-with-tests

build:
  stage: build
  script:
    - npm i
    - npm run test
  cache:
    policy: push
    paths:
      - node_modules
  artifacts:
    paths:
      - built
Also adding my Gruntfile, in case it helps:
/* global module */
module.exports = function (grunt) {
  grunt.initConfig({
    qunit: {
      all: {
        options: {
          timeout: 9000,
          urls: [
            "http://localhost:9000/webcontent/test/unit/unitTests.qunit.html"
          ]
        }
      },
      //all: ["webcontent/test/unit/unitTests.qunit.html"],
      options: {
        timeout: 2000,
      }
    },
    connect: {
      options: {
        //open: true,
      },
      first: {
        options: {
          port: 9000,
          //livereload: 3500,
          base: "./"
        }
      },
      second: {
        options: {
          open: {
            target: "http://localhost:9000/webcontent"
          },
          keepalive: true,
          port: 9000,
          livereload: 3501,
          base: "./",
        }
      }
    },
  });

  grunt.loadNpmTasks("grunt-contrib-connect");
  grunt.loadNpmTasks("grunt-contrib-qunit");

  grunt.registerTask("test", [
    "connect:first", "qunit"
  ]);
  grunt.registerTask("default", [
    "connect:second"
  ]);
};

Related

How to run Playwright in headless mode?

I created a new Vue app using npm init vue@latest and selected Playwright for e2e tests. I removed firefox and webkit from projects in the playwright.config.ts file, so it will only use chromium.
Running npm run test:e2e works fine, and the process exits with a success code.
When I force the tests to fail by modifying the ./e2e/vue.spec.ts file, the failures are reported in the output, but the process does not exit with an error code; it also still opened browser windows, so CI environments would freeze.
I searched the docs for a specific flag, e.g. "headless", and tried --max-failures -x, but that didn't help.
How can I tell Playwright to run in headless mode and exit with an error code when something fails?
Since playwright.config.ts already makes use of process.env.CI, I thought about replacing reporter: "html", with reporter: [["html", { open: !process.env.CI ? "on-failure" : "never" }]],
but which arguments should I add to the script "test:e2e:ci": "playwright test", to ensure process.env.CI is set?
Update
I tried to run the script inside my CI environment and it seems to work out of the box (I don't know how it sets the CI environment flag, but the pipeline did not freeze):
- name: Install Playwright Browsers
  run: npx playwright install --with-deps
- name: Check if e2e tests are passing
  run: npm run test:e2e
If any test fails, it exits with an error code.
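For context: most CI providers, GitHub Actions included, export CI=true in every job, which is why process.env.CI is already set there without extra arguments. If you also want to force it for local runs, one option (a sketch assuming the cross-env package is installed; the script name comes from the question) is a dedicated script:
// package.json (scripts excerpt) — cross-env sets CI for the child process
{
  "scripts": {
    "test:e2e:ci": "cross-env CI=1 playwright test"
  }
}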
It's serving the HTML report and asking you to press 'Ctrl+C' to quit. You can disable that using the configuration below.
// playwright.config.ts
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  reporter: [['html', { open: 'never' }]],
};

export default config;
Refer - Report Doc
Issue - https://github.com/microsoft/playwright/issues/9702
To add to the answer above, you can set headless: true in the 'use' block of the config, which is above the projects block. Anything set at that level will apply to all projects unless you specifically override the setting inside a project-specific area:
// playwright.config.ts
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  reporter: [['html', { open: 'never' }]],
  use: {
    headless: true,
  },
  projects: [
    {
      name: 'chromium',
      use: {
        browserName: 'chromium',
      },
    },
  ],
};

export default config;
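With headless: true at the top level and CI set in the environment, npx playwright test runs without opening browser windows and exits non-zero when any test fails, so pipelines fail as expected.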

Can't instrument code when serving storybook production build

I have a Storybook setup with Vue 3 and Vite, and I want to measure my code coverage via Istanbul when I run Playwright tests.
To that end, I configured Storybook's Vite build in .storybook/main.ts as follows:
const config: StorybookViteConfig = {
  // ...
  typescript: {
    check: false,
    checkOptions: {},
  },
  framework: '@storybook/vue3',
  core: {
    builder: '@storybook/builder-vite',
  },
  // ...
  async viteFinal(config, { configType }) {
    return mergeConfig(config, {
      plugins: [
        istanbul({
          include: 'src/*',
          exclude: ['node_modules', 'test/'],
          extension: ['.js', '.ts', '.vue'],
        }),
      ],
      // ...
    });
  },
};

export default config;
When I run Storybook in dev mode with start-storybook -p 6006 and execute my Playwright tests afterwards, the code is instrumented (coverage is not null) and code coverage is measured.
However, when I build Storybook and serve the static build with build-storybook && http-server storybook-static --port 6006, the website works fine, but the coverage variable doesn't exist and no code coverage is measured when I run Playwright tests against it.
I want to measure my code coverage in CI using the built Storybook (see https://storybook.js.org/docs/react/writing-tests/test-runner#run-against-non-deployed-storybooks). Or is there another way to run Playwright tests and measure code coverage in CI?
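One thing worth checking, assuming the istanbul() plugin above is vite-plugin-istanbul: by default it instruments code only in dev/serve mode, so a production build ships uninstrumented code, which would explain the missing coverage object. The plugin has a forceBuildInstrument flag for this case (a sketch; verify the option against the plugin version you use):
// viteFinal excerpt — forceBuildInstrument also instruments `vite build` output
istanbul({
  include: 'src/*',
  exclude: ['node_modules', 'test/'],
  extension: ['.js', '.ts', '.vue'],
  forceBuildInstrument: true,
}),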

How to set correct Selenium host for nightwatch e2e test on gitlab?

I would like to add some e2e tests for my vue.js application and run them in the pipeline.
The corresponding part in my gitlab-ci.yml looks like this:
e2e:
  image: node:8
  before_script:
    - npm install
  services:
    - name: selenium/standalone-chrome
      alias: chrome
  stage: testing
  script:
    - cd online-leasing-frontend
    - npm install
    - npm run test:e2e
And my nightwatch.js config:
{
  "selenium": {
    "start_process": false
  },
  "test_settings": {
    "default": {
      "selenium_port": 4444,
      "selenium_host": "chrome"
    }
  }
}
Is "selenium_host": "chrome" the correct way of setting the host for the Selenium service?
I get the following error indicating that my e2e test can’t connect to the selenium service:
Connection refused! Is selenium server started?
Any tips?
The problem was that, according to this issue, GitLab CI uses the Kubernetes executor instead of the Docker executor, which maps all services to 127.0.0.1. After setting selenium_host to this address, everything worked.
{
  "selenium": {
    "start_process": false
  },
  "test_settings": {
    "default": {
      "selenium_port": 4444,
      "selenium_host": "127.0.0.1"
    }
  }
}
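If the same config has to work both locally and on CI, one option is to switch to a JS config file and read the host from the environment (a sketch; SELENIUM_HOST is a hypothetical variable name):
// nightwatch.conf.js — SELENIUM_HOST is a made-up env var, defaulting to the CI mapping
module.exports = {
  selenium: {
    start_process: false
  },
  test_settings: {
    default: {
      selenium_port: 4444,
      selenium_host: process.env.SELENIUM_HOST || "127.0.0.1"
    }
  }
};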
On the Selenium Repo it says:
"When executing docker run for an image with Chrome or Firefox please either mount -v /dev/shm:/dev/shm or use the flag --shm-size=2g to use the host's shared memory."
I don't know gitlab-ci that well, but I'm afraid it is not possible to add this as a parameter to a service.
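For self-managed runners using the Docker executor, the shared-memory size can at least be raised in the runner's own config.toml rather than per job (a sketch; whether it also applies to service containers depends on the runner version):
# /etc/gitlab-runner/config.toml (excerpt) — value is in bytes
[[runners]]
  [runners.docker]
    shm_size = 2147483648  # 2 GB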

Protractor pointing to Sauce Labs Selenium Server

I'm trying to integrate Protractor with Sauce Labs from Travis. I can get the sauce_connect server running correctly but am unable to get Travis to point to that particular remote server.
Travis gets to the point where it initiates sauce_connect, but when I run "protractor:analytics" it doesn't point to the correct server and fails.
.travis.yml:
language: python
python:
  - 3.2_with_system_site_packages
branches:
  only:
    - develop
before_install:
  - sudo apt-get update -qq
  - sudo apt-get install python-numpy
install:
  - cd lib && python setup.py install
  - cd .. && pip install -r requirements/travis_requirements.txt
  - npm install
script:
  - grunt karma:single
  - grunt protractor:analytics
env:
  global:
    - secure: <string>
    - secure: <string>
sauce_connect: true
Gruntfile:
protractor: {
  options: {
    configFile: './webapp/static/test/e2e/protractor.conf.js',
    keepAlive: true
  },
  singlerun: {},
  analytics: {
    options: {
      //debug : true,
      args: {
        specs: ['./webapp/static/test/e2e/analytics_spec.js']
      }
    }
  },
},
Protractor Conf:
exports.config = {
  chromeOnly: false,
  seleniumArgs: [],

  // If sauceUser and sauceKey are specified, seleniumServerJar will be ignored.
  // The tests will be run remotely using SauceLabs.
  sauceUser: process.env.SAUCE_USER,
  sauceKey: process.env.SAUCE_KEY,

  baseUrl: 'http://localhost:8000',
  specs: [
    './*_spec.js',
  ],

  // Patterns to exclude.
  exclude: [],
  multiCapabilities: [],

  // ----- More information for your tests ----
  //
  // A base URL for your application under test. Calls to protractor.get()
  // with relative paths will be prepended with this.
  baseUrl: process.env.SN_BASE_URL,

  // Selector for the element housing the angular app - this defaults to
  // body, but is necessary if ng-app is on a descendant of <body>
  rootElement: 'body',

  // A callback function called once protractor is ready and available, and
  // before the specs are executed.
  // You can specify a file containing code to run by setting onPrepare to
  // the filename string.
  onPrepare: function() {
    // At this point, global 'protractor' object will be set up, and jasmine
    // will be available. For example, you can add a Jasmine reporter with:
    // jasmine.getEnv().addReporter(new jasmine.JUnitXmlReporter(
    //   'outputdir/', true, true));
  },

  // The params object will be passed directly to the protractor instance,
  // and can be accessed from your test. It is an arbitrary object and can
  // contain anything you may need in your test.
  // This can be changed via the command line as:
  //   --params.login.user 'Joe'
  params: {
    login: {
      user: process.env.SN_TEST_USERNAME,
      password: process.env.SN_TEST_PASSWORD
    }
  },

  framework: 'jasmine',

  // ----- Options to be passed to minijasminenode -----
  //
  // See the full list at https://github.com/juliemr/minijasminenode
  jasmineNodeOpts: {
    // onComplete will be called just before the driver quits.
    onComplete: null,
    // If true, display spec names.
    isVerbose: false,
    // If true, print colors to the terminal.
    showColors: true,
    // If true, include stack traces in failures.
    includeStackTrace: true,
    // Default time to wait in ms before a test fails.
    defaultTimeoutInterval: 30000
  },

  onCleanUp: function() {}
};
If I understood correctly: the Sauce Connect tool is not used by Protractor/Selenium when running a test suite.
Well, I had this problem. Travis requires the Sauce credentials, and Protractor requires those credentials plus a tunnel identifier:
.travis.yml:
addons:
  sauce_connect:
    username: xxx
    access_key: xxx
protractor.conf.js:
exports.config = {
  // ...
  sauceUser: process.env.SAUCE_USERNAME,
  sauceKey: process.env.SAUCE_ACCESS_KEY,
  capabilities: {
    // ...
    'tunnel-identifier': process.env.TRAVIS_JOB_NUMBER,
  }
}
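For context: Travis exports TRAVIS_JOB_NUMBER automatically, and its sauce_connect addon opens the tunnel under that same identifier, so the 'tunnel-identifier' capability above is what routes Protractor's Sauce session through the tunnel the addon started.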

Grunt watch: only upload files that have changed

I was able to set up a Grunt task to SFTP files up to my dev server using grunt-ssh:
sftp: {
  dev: {
    files: {
      './': ['**', '!{node_modules,artifacts,sql,logs}/**'],
    },
    options: {
      path: '/path/to/project',
      privateKey: grunt.file.read(process.env.HOME + '/.ssh/id_rsa'),
      host: '111.111.111.111',
      port: 22,
      username: 'marksthebest',
    }
  }
},
But this uploads everything when I run it. There are thousands of files. I don't have time to wait for them to upload one-by-one every time I modify a file.
How can I set up a watch to upload only the files I've changed, as soon as I've changed them?
(For the curious, the server is a VM on the local network. It runs on a different OS and the setup is more similar to production than my local machine. Uploads should be lightning quick if I can get this working correctly)
What you need is grunt-newer, a task designed specifically to update the configuration of another task depending on which files just changed, and then run it. An example configuration could look like the following:
watch: {
  all: {
    files: ['**', '!{node_modules,artifacts,sql,logs}/**'],
    tasks: ['newer:sftp:dev']
  }
}
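For this to work, grunt-newer has to be installed and loaded alongside the other plugins (a minimal sketch):
// Gruntfile.js excerpt — makes the "newer:" task prefix available
grunt.loadNpmTasks('grunt-newer');
grunt.loadNpmTasks('grunt-ssh');
grunt.loadNpmTasks('grunt-contrib-watch');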
You can do that using the watch event of grunt-contrib-watch.
You basically need to handle the watch event, modify the sftp files config to only include the changed files, and then let grunt run the sftp task.
Something like this:
module.exports = function(grunt) {
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    secret: grunt.file.readJSON('secret.json'),
    watch: {
      test: {
        files: 'files/**/*',
        tasks: 'sftp',
        options: {
          spawn: false
        }
      }
    },
    sftp: {
      test: {
        files: {
          "./": "files/**/*"
        },
        options: {
          path: '/path/on/the/server/',
          srcBasePath: 'files/',
          host: 'hostname.com',
          username: '<%= secret.username %>',
          password: '<%= secret.password %>',
          showProgress: true
        }
      }
    }
  }); // end grunt.initConfig

  // on watch events, configure sftp.test.files to only include the changed file
  grunt.event.on('watch', function(action, filepath) {
    grunt.config('sftp.test.files', {"./": filepath});
  });

  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.loadNpmTasks('grunt-ssh');
};
Note the "spawn: false" option, and the way you need to set the config inside the event handler.
Note2: this code will upload one file at a time, there's a more robust method in the same link.
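The more robust variant collects every file changed within a short window and reconfigures sftp once for the whole batch, along the lines of the pattern in the grunt-contrib-watch docs (a sketch; the 200 ms debounce window is an arbitrary choice):
// Batch watch events, then upload all changed files in one sftp run
var changedFiles = Object.create(null);
var onChange = grunt.util._.debounce(function() {
  grunt.config('sftp.test.files', { "./": Object.keys(changedFiles) });
  changedFiles = Object.create(null);
}, 200);
grunt.event.on('watch', function(action, filepath) {
  changedFiles[filepath] = action;
  onChange();
});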
You can achieve that with Grunt using:
grunt-contrib-watch
grunt-rsync
First things first: I am using a Docker container, and I added a public SSH key to it. So I am uploading into my "remote" container only the files that have changed in my local environment, with this Grunt task:
'use strict';

module.exports = function(grunt) {
  grunt.initConfig({
    rsync: {
      options: {
        args: ['-avz', '--verbose', '--delete'],
        exclude: ['.git*', 'cache', 'log'],
        recursive: true
      },
      development: {
        options: {
          src: './',
          dest: '/var/www/development',
          host: 'root@www.localhost.com',
          port: 2222
        }
      }
    },
    sshexec: {
      development: {
        command: 'chown -R www-data:www-data /var/www/development',
        options: {
          host: 'www.localhost.com',
          username: 'root',
          port: 2222,
          privateKey: grunt.file.read("/Users/YOUR_USER/.ssh/id_containers_rsa")
        }
      }
    },
    watch: {
      development: {
        files: [
          'node_modules',
          'package.json',
          'Gruntfile.js',
          '.gitignore',
          '.htaccess',
          'README.md',
          'config/*',
          'modules/*',
          'themes/*',
          '!cache/*',
          '!log/*'
        ],
        tasks: ['rsync:development', 'sshexec:development']
      }
    },
  });

  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.loadNpmTasks('grunt-rsync');
  grunt.loadNpmTasks('grunt-ssh');

  grunt.registerTask('default', ['watch:development']);
};
Good Luck and Happy Hacking!
I recently ran into a similar issue where I wanted to upload only files that have changed. I'm only using grunt-exec. Provided you have SSH access to your server, you can do this task with much greater efficiency. I also created an rsync.json that is ignored by git, so collaborators can have their own rsync data.
The benefit is that if anyone makes a change, it automatically uploads to their stage.
// Watch - runs tasks when any changes are detected.
watch: {
  scripts: {
    files: '**/*',
    tasks: ['deploy'],
    options: {
      spawn: false
    }
  }
}
My deploy task is a registered task that compiles scripts, then runs exec:deploy:
// Showing exec:deploy task
// Using rsync with ssh keys instead of login/pass
exec: {
  deploy: {
    cmd: 'rsync public_html/* <%= rsync.options %> <%= rsync.user %>@<%= rsync.host %>:<%= rsync.path %>'
  }
}
You see a lot of the <%= rsync %> stuff? I use that to grab info from rsync.json, which is ignored by git. I only have this because this is a team workflow.
// rsync.json
{
  "options": "-rvp --progress -a --delete -e 'ssh -q'",
  "user": "mmcfarland",
  "host": "example.com",
  "path": "~/stage/public_html"
}
Make sure rsync.json is loaded in your Gruntfile:
module.exports = function(grunt) {
  var rsync = grunt.file.readJSON('path/to/rsync.json');
  var pkg = grunt.file.readJSON('path/to/package.json');

  grunt.initConfig({
    pkg: pkg,
    rsync: rsync,
    // ...
I don't think it's a good idea to upload everything that changed at once to a staging server, and working directly on the staging server is not a good idea either. You should configure your local machine's server to be the same as staging/production.
It's better to upload once, when you do a deployment.
You can archive all the files using grunt-contrib-compress and push them with grunt-ssh as one file, then extract it on the server; that will be much faster.
Here's an example of the compress task:
compress: {
  main: {
    options: {
      archive: 'build/build.tar.gz',
      mode: 'tgz'
    },
    files: [
      { cwd: 'build/', src: ['sites/all/modules/**'], dest: './' },
      { cwd: 'build/', src: ['sites/all/themes/**'], dest: './' },
      { cwd: 'build/', src: ['sites/default/files/**'], dest: './' }
    ]
  }
}
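A sketch of the corresponding push-and-extract step with grunt-ssh, which provides both the sftp and sshexec tasks (host, user, and paths are placeholders):
// Upload the archive in one transfer, then unpack it remotely
sftp: {
  deploy: {
    files: { "./": "build/build.tar.gz" },
    options: {
      path: '/var/www/releases/',
      host: 'example.com',
      username: 'deploy',
      privateKey: grunt.file.read(process.env.HOME + '/.ssh/id_rsa')
    }
  }
},
sshexec: {
  extract: {
    command: 'tar -xzf /var/www/releases/build.tar.gz -C /var/www/current',
    options: {
      host: 'example.com',
      username: 'deploy',
      privateKey: grunt.file.read(process.env.HOME + '/.ssh/id_rsa')
    }
  }
}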
PS: I never looked into the rsync Grunt modules.
I understand this might not be what you are looking for, but I decided to post my answer as a standalone answer.