Web Component Tester - not finding Chrome locally when testing

The title says it all:
one#localhost ~/github/my-el $ polymer test -l chrome
step: loadPlugins
step: configure
hook: configure
Error: The following browsers were not found: chrome. (All installed browsers found: )
one#localhost ~/github/my-el $ which chrome
/usr/bin/chrome
one#localhost ~/github/my-el $ lr /usr/bin/chrome
lrwxrwxrwx 1 root root 29 Jul 8 21:25 /usr/bin/chrome -> /usr/bin/google-chrome-stable
Reference https://github.com/SeleniumHQ/selenium/wiki/DesiredCapabilities and
https://github.com/Polymer/web-component-tester/blob/master/runner/config.ts#L127
one#localhost ~/github/my-el $ cat wct.conf.json
{
  "verbose": true,
  "plugins": {
    "local": {
      "browsers": ["chrome"],
      "browserOptions": {
        "browserName": "/usr/bin/chrome",
        "platform": "LINUX"
      }
    }
  }
}

I found the answer here: https://github.com/Polymer/web-component-tester/issues/222
Place this in the shell for good times:
export LAUNCHPAD_CHROME=/usr/bin/google-chrome-stable
I was going to delete this post, but I will leave it up for easy reference for other good souls searching for an answer.
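To make the fix stick across shells and verify it, here is a small sketch (persisting it in ~/.bashrc is just one convenient option, not part of the original answer):
# Persist the workaround for future shells, then re-run the tests
echo 'export LAUNCHPAD_CHROME=/usr/bin/google-chrome-stable' >> ~/.bashrc
source ~/.bashrc
polymer test -l chrome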

Related

Selenium 4.x trying to POST CDP: "UnsupportedCommandException"

I'm trying to execute some commands via CDP; however, no matter what combination of Selenium/Driver/Chrome I use, it's always the same result.
Last tested with:
Selenium 4.1.1
Chrome + Driver 96.0.4664.110
The project is written in C, so I am posting to Selenium manually via cURL. Every other command besides CDP works fine.
I have checked Selenium and ChromeDriver; they both have CDP support built in.
The URLs I tried to post to are:
- /session/id/goog/cdp/execute
- /session/id/{}/cdp/execute
The posted data format is: "cmd" + "params" (json object).
Both end in the same result: org.openqa.selenium.UnsupportedCommandException.
I also tried running Selenium in different modes (standalone, hub/node), with the same result.
Can someone please advise what I am doing wrong? Or maybe I have misunderstood the usage?
Using chromedriver executable
This worked for me (Windows + Postman), but it should also work with cURL on Linux/Mac.
1 Download chromedriver from https://chromedriver.chromium.org/downloads for your Chrome version.
2 Start chromedriver
start chromedriver.exe
output:
Starting ChromeDriver 97.0.4692.71 on port 9515...
3 Send requests to localhost:9515/
3.1 Create Session:
POST localhost:9515/session
request json body:
{"capabilities":{"goog:chromeOptions": {}}}
status 200
response:
{
  "value": {
    "capabilities": {
      ...
    },
    "sessionId": "b8ac49ce2203739fa0d32dfe8d1a23b5"
  }
}
3.2 Navigate to some URL (optional; just to check that requests by sessionId work):
POST localhost:9515/session/b8ac49ce2203739fa0d32dfe8d1a23b5/url
request json body:
{"url": "https://example.com"}
status 200
3.3 Execute CDP command (take screenshot):
POST localhost:9515/session/b8ac49ce2203739fa0d32dfe8d1a23b5/goog/cdp/execute
request json body:
{"cmd":"Page.captureScreenshot", "params":{}}
status 200
response:
{
  "value": {
    "data": "iVBORw0KGgoAAAANSUhEUgA...."
  }
}
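Since the original question drives the session with cURL rather than Postman, here is a rough cURL equivalent of steps 3.1 and 3.3 (the session id below is the illustrative one from the responses above; substitute the one returned by your own session request):
# 3.1 Create a session against a locally running chromedriver
curl -s -X POST http://localhost:9515/session \
  -H 'Content-Type: application/json' \
  -d '{"capabilities":{"goog:chromeOptions":{}}}'

# 3.3 Execute a CDP command (take a screenshot) for that session
curl -s -X POST http://localhost:9515/session/b8ac49ce2203739fa0d32dfe8d1a23b5/goog/cdp/execute \
  -H 'Content-Type: application/json' \
  -d '{"cmd":"Page.captureScreenshot","params":{}}'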
Allow remote connections
By default chromedriver allows only local connections.
To allow some remote IPs:
start chromedriver.exe --allowed-ips="some-remote-ip"
Reference: https://sites.google.com/a/chromium.org/chromedriver/security-considerations
Run CDP commands with Selenium Grid
Eventually, it started to work for me with
ChromeDriver 97.0.4692.71
selenium-server-4.1.1
Chrome 97.0.4692.71 (Official Build) (64-bit)
Note: the Content-Type header should include charset=utf-8
(Content-Type: application/json;charset=utf-8) for Selenium Grid HTTP requests.
Prerequisites
1 Download and run selenium server according to
https://www.selenium.dev/documentation/grid/getting_started/
java -jar selenium-server-<version>.jar standalone --driver-configuration display-name='Chrome' stereotype='{"browserName":"chrome"}'
2 Create Session:
POST localhost:4444/wd/hub/session
request json body:
{
  "desiredCapabilities": {
    "browserName": "chrome",
    "goog:chromeOptions": {
      "args": [],
      "extensions": []
    }
  },
  "capabilities": {
    "firstMatch": [
      {
        "browserName": "chrome",
        "goog:chromeOptions": {
          "args": [],
          "extensions": []
        }
      }
    ]
  }
}
status 200
response:
{
  "status": 0,
  "sessionId": "69ac1c82306f72c7aaf53cfbb28a30e7",
  ...
}
3 Execute CDP command (take screenshot):
POST localhost:4444/wd/hub/session/69ac1c82306f72c7aaf53cfbb28a30e7/goog/cdp/execute
request json body:
{"cmd":"Page.captureScreenshot", "params":{}}
status 200
response:
{
  "value": {
    "data": "iVBORw0KGgoAAAANSUhEUgA...."
  }
}
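A hedged cURL sketch of the same Grid flow, mainly to show the charset requirement mentioned above; the capabilities body is trimmed down and the session id is the illustrative one from the response:
# Create a session on the Grid (note the charset in the Content-Type header)
curl -s -X POST http://localhost:4444/wd/hub/session \
  -H 'Content-Type: application/json;charset=utf-8' \
  -d '{"capabilities":{"firstMatch":[{"browserName":"chrome"}]}}'

# Execute the CDP command against the returned session id
curl -s -X POST http://localhost:4444/wd/hub/session/69ac1c82306f72c7aaf53cfbb28a30e7/goog/cdp/execute \
  -H 'Content-Type: application/json;charset=utf-8' \
  -d '{"cmd":"Page.captureScreenshot","params":{}}'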

"ChromeHeadless have not captured in 60000 ms, killing." occuring only in Gitlab hosted CI/CD pipeline

When running a CI/CD pipeline on GitLab, my Karma tests are timing out with the error:
ℹ 「wdm」: Compiled successfully.
05 08 2019 22:25:31.483:INFO [karma-server]: Karma v4.2.0 server started at http://0.0.0.0:9222/
05 08 2019 22:25:31.485:INFO [launcher]: Launching browsers ChromeHeadlessNoSandbox with concurrency 1
05 08 2019 22:25:31.488:INFO [launcher]: Starting browser ChromeHeadless
05 08 2019 22:26:31.506:WARN [launcher]: ChromeHeadless have not captured in 60000 ms, killing.
05 08 2019 22:26:31.529:INFO [launcher]: Trying to start ChromeHeadless again (1/2).
05 08 2019 22:27:31.580:WARN [launcher]: ChromeHeadless have not captured in 60000 ms, killing.
05 08 2019 22:27:31.600:INFO [launcher]: Trying to start ChromeHeadless again (2/2).
05 08 2019 22:28:31.659:WARN [launcher]: ChromeHeadless have not captured in 60000 ms, killing.
05 08 2019 22:28:31.689:ERROR [launcher]: ChromeHeadless failed 2 times (timeout). Giving up.
npm ERR! Test failed. See above for more details.
This problem does not occur when running tests locally, and it does not occur when running the tests using the same Docker image with GitLab Runner locally.
I feel like I have tried every possible configuration of karma.conf.js. I have Googled this issue relentlessly and tried every suggestion, from proxy servers to environment variables to flags, but no luck. I have tried multiple Docker images, as this was initially failing on the local GitLab Runner too, but I found that the Docker image selenium/standalone-chrome:latest works fine in the local GitLab Runner.
Here is my karma.conf.js file:
const process = require('process');
process.env.CHROME_BIN = require('puppeteer').executablePath();
module.exports = function(config) {
  config.set({
    // base path that will be used to resolve all patterns (e.g. files, exclude)
    basePath: '',
    // frameworks to use
    frameworks: [ 'jasmine' ],
    // list of files / patterns to load in the browser
    files: [
      'src/**/*.spec.js'
    ],
    // list of files / patterns to exclude
    exclude: [],
    // preprocess matching files before serving them to the browser
    preprocessors: {
      'src/**/*.spec.js': [ 'webpack' ]
    },
    webpack: {
      // webpack configuration
      mode: 'development',
      module: {
        rules: [
          {
            test: /\.js$/,
            loader: 'babel-loader',
            exclude: /node_modules/,
            query: {
              presets: ['env']
            }
          }
        ]
      },
      stats: {
        colors: true
      }
    },
    // test results reporter to use
    reporters: [ 'spec' ],
    // web server port
    port: 9222,
    // enable / disable colors in the output (reporters and logs)
    colors: true,
    // level of logging
    // possible values: config.LOG_DISABLE || config.LOG_ERROR || config.LOG_WARN || config.LOG_INFO || config.LOG_DEBUG
    logLevel: config.LOG_INFO,
    // enable / disable watching files and executing tests whenever any file changes
    autoWatch: true,
    // plugins for karma
    plugins: [
      'karma-chrome-launcher',
      'karma-webpack',
      'karma-jasmine',
      'karma-spec-reporter'
    ],
    // start these browsers
    browsers: ['ChromeHeadlessNoSandbox'],
    customLaunchers: {
      ChromeHeadlessNoSandbox: {
        base: 'ChromeHeadless',
        flags: [
          '--headless',
          '--no-sandbox',
          '--disable-gpu'
        ]
      }
    },
    captureTimeout: 60000,
    browserDisconnectTolerance: 5,
    browserDisconnectTimeout: 30000,
    browserNoActivityTimeout: 30000,
    // Continuous Integration mode
    // if true, Karma captures browsers, runs the tests and exits
    singleRun: true,
    // Concurrency level
    // how many browsers should be started simultaneously
    concurrency: 1
  })
}
And here is my .gitlab-ci.yml file:
.prereq_scripts: &prereq_scripts |
  sudo apt -y update && sudo curl -sL https://deb.nodesource.com/setup_10.x | sudo bash && sudo apt -y install nodejs
image: 'selenium/standalone-chrome:latest'
stages:
  - test
test:
  stage: test
  script:
    - *prereq_scripts
    - npm install
    - npm test
I am expecting the tests to run successfully in all three cases (local npm, local GitLab Runner, and the remote GitLab CI/CD pipeline). Currently they only run successfully in the first two.
In your karma.conf.js file you need to declare the CHROME_BIN variable inside the module.exports function:
module.exports = function(config) {
  const process = require('process');
  process.env.CHROME_BIN = require('puppeteer').executablePath();
  config.set({
    ...
Currently, Puppeteer has an issue with Karma on Linux machines; see the GitHub issue.
There are plenty of solutions for making it work without Puppeteer if you only use Puppeteer to install headless Chromium.
I have installed it on my Jenkins Alpine machine using only two bash lines:
apk add chromium
export CHROME_BIN=/usr/bin/chromium-browser
Alternatively, you can use Docker with the same setup. One of the examples is here
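If you are on a Debian-based image instead (for example the selenium/standalone-chrome image from the question), the same idea applies; a hedged sketch that simply points CHROME_BIN at whichever Chrome/Chromium binary is already on the PATH before running the tests:
# Use whatever Chrome/Chromium binary the image already provides
export CHROME_BIN=$(command -v google-chrome || command -v chromium-browser || command -v chromium)
echo "Using CHROME_BIN=$CHROME_BIN"
npm test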
Docker image with Chrome headless
For example, use a Docker image of angular/ngcontainer with Chrome headless for testing UI apps.
image: 'angular/ngcontainer:latest'
I have also created a Docker image with the latest Chrome:
image: 'anulals/angular'
https://hub.docker.com/r/angular/ngcontainer
https://hub.docker.com/r/anulals/angular
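A hedged sketch for trying such an image locally before wiring it into CI (the mount path and the exact command are assumptions, not taken from the image's documentation):
# Run the test suite inside the ngcontainer image with the project mounted in
docker run --rm -v "$PWD":/app -w /app angular/ngcontainer:latest \
  bash -c "npm install && npm test"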

My Nightwatch.js tests do not run in headless Chrome on CentOS

I run Nightwatch.js tests using Nightwatch version 1.0.18. It works in a Windows environment, but when I run it on CentOS, after installing Xvfb, I get the errors below.
Error while running .navigateTo() protocol action: invalid session id
Error while running .locateMultipleElements() protocol action: invalid session id
Error while running .locateMultipleElements() protocol action: invalid session id
Here is my nightwatch.json file code:
{
  "src_folders": [
    "./tests"
  ],
  "output_folder": "./reports",
  "custom_commands_path": "./custom_commands",
  "custom_assertions_path": "",
  "test_workers": false,
  "webdriver": {
    "start_process": true
  },
  "test_settings": {
    "default": {
      "webdriver": {
        "port": 9515,
        "server_path": "./node_modules/chromedriver/lib/chromedriver/chromedriver",
        "cli_args": [
          "--log",
          "debug"
        ]
      },
      "skip_testcases_on_fail": true,
      "desiredCapabilities": {
        "browserName": "chrome",
        "javascriptEnabled": true,
        "acceptSslCerts": true,
        "chromeOptions": {
          "args": [
            "headless",
            "no-sandbox",
            "disable-gpu"
          ]
        }
      }
    }
  }
}
Am I missing something needed to run my tests in the CentOS environment, given that they run fine in the Windows environment?
I had the same issue with Nightwatch.js and the npm chromedriver setup.
Background:
Everything was working until I recently updated Chromium on my system. In addition to the errors in the original post, verbose logging also showed:
{
  message: 'unknown error: Chrome failed to start: exited abnormally',
  error: [
    "(unknown error: DevToolsActivePort file doesn't exist)",
    '(The process started from chrome location /usr/bin/chromium is no longer running, so ChromeDriver is assuming that Chrome has crashed.)',
    '(Driver info: chromedriver=2.46.628388 (4a34a70827ac54148e092aafb70504c4ea7ae926),platform=Linux 4.9.0-8-amd64 x86_64)'
  ],
}
After downloading the standalone chromedriver (2.46.628388) to match my Chromium version (72.0.3626.69) it was still showing the same errors.
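When chasing this kind of mismatch it helps to confirm which versions are actually installed; a quick sketch, assuming both binaries are on the PATH (names and locations vary by distribution):
# Check that the browser and driver major versions line up
chromium --version      # or: google-chrome --version / chromium-browser --version
chromedriver --version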
Solution:
I ended up downloading an older version of Chromium (71.0.3578.127) and setting chromeOptions.binary to the path of the Chromium 71 binary. I also had to include 'no-sandbox' in chromeOptions.args.
Here is the snippet from the site mentioned above:
Downloading old builds of Chrome / Chromium
Let's say you want a build of Chrome 44 for debugging purposes. Google does not offer old builds as they do not have up-to-date security fixes.
However, you can get a build of Chromium 44.x which should mostly match the stable release. Here's how you find it:
Look in https://googlechromereleases.blogspot.com/search/label/Stable%20updates for the last time "44." was mentioned.
Look up that version history ("44.0.2403.157") in the Position Lookup.
In this case it returns a base position of "330231". This is the commit where the 44 release was branched, back in May 2015.
Open the continuous builds archive
Click through on your platform (Linux/Mac/Win)
Paste "330231" into the filter field at the top and wait for all the results to XHR in.
Eventually I get a perfect hit: https://commondatastorage.googleapis.com/chromium-browser-snapshots/index.html?prefix=Mac/330231/
Sometimes you may have to decrement the commit number until you find one.
Download and run! (A scripted sketch for Linux follows below.)
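A hedged sketch of scripting the download on Linux, assuming the snapshot for that base position exists and follows the usual chrome-linux.zip naming in the continuous-builds bucket:
# Fetch and unpack the Chromium continuous-build snapshot for a given base position (Linux 64-bit)
POSITION=330231
wget "https://commondatastorage.googleapis.com/chromium-browser-snapshots/Linux_x64/${POSITION}/chrome-linux.zip"
unzip chrome-linux.zip
./chrome-linux/chrome --version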
Upgrading to the latest version of chromedriver solved the issue for me. You can find the latest version here: https://www.npmjs.com/package/chromedriver
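For the npm-managed driver that usually just means bumping the package; a minimal sketch (the package's install script downloads the driver binary):
# Pull the latest chromedriver npm package
npm install --save-dev chromedriver@latest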
In my situation, when this error occurred:
Error while running .navigateTo() protocol action: invalid session id
I added the following to .travis.yml:
addons:
  chrome: stable

Node launcher for testem using the Jasmine framework

I am testing Node.js backend code with Jasmine and trying to set up testem.
My testem.json:
{
  "framework": "jasmine",
  "launchers": {
    "Node": {
      "command": "jasmine"
    }
  },
  "launch_in_dev": [
    "Node"
  ]
}
When I run testem I see proper Jasmine output, no problems here; it says:
Started
....
4 specs, 0 failures
Finished in 0.024 seconds
But it seems like the Jasmine reporter doesn't report the total/passed/failed counts back to testem.
It just shows:
How do I fix that?

Protractor could not find protractor/selenium/chromedriver.exe on Codeship

I'm trying to configure the integration to run Protractor tests.
I'm using the grunt-protractor-runner task
with the following configuration:
protractor: {
  options: {
    configFile: "protractor.conf.js", // your protractor config file
    keepAlive: true, // If false, the grunt process stops when the test fails.
    noColor: false, // If true, protractor will not use colors in its output.
    args: {
      // Arguments passed to the command
    }
  },
  run: {},
  chrome: {
    options: {
      args: {
        browser: "chrome"
      }
    }
  }
}
And here is the Grunt task which I use to run Protractor after the server is running:
grunt.registerTask('prot', [
  'connect:test',
  'replace:includemocks', // for uncommenting angular-mocks reference
  'protractor:run',
  'replace:removemocks', // for commenting out angular-mocks reference
]);
It runs well on my local machine, but on Codeship I'm getting the following error:
Error: Could not find chromedriver at /home/rof/src/bitbucket.org/myrepo/myFirstRepo/node_modules/grunt-protractor-runner/node_modules/protractor/selenium/chromedriver.exe
Which, I guess, is a result of not having chromedriver.exe at this path.
How can I solve this in the Codeship environment?
Thanks in advance.
Add a postinstall script to your package.json file, and that way npm install will take care of placing the binaries for you ahead of time:
"scripts": {
"postinstall": "echo -n $NODE_ENV | \
grep -v 'production' && \
./node_modules/protractor/bin/webdriver-manager update || \
echo 'will skip the webdriver install/update in production'",
...
},
And don't forget to set NODE_ENV: not setting it at all will result in the echo 'will skip the webdriver install/update in production' branch running. Setting it to dev or staging will get the desired result.
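For example, on the CI box you might set it explicitly before installing (the value dev here is only an illustration):
# Any value other than 'production' lets the webdriver-manager update branch run
NODE_ENV=dev npm install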
Short answer (pulkitsinghal gave the original solution):
./node_modules/grunt-protractor-runner/node_modules/protractor/bin/webdriver-manager update
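On Codeship this would typically go into the setup commands before the test step; a hedged sketch, assuming the nested protractor path from the short answer and the 'prot' Grunt task registered in the question:
npm install
./node_modules/grunt-protractor-runner/node_modules/protractor/bin/webdriver-manager update
grunt prot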
I'm one of the founders at Codeship.
The error seems to occur because you are trying to use the .exe file, but our system runs on Linux. Did you hardcode that executable?
Could you send us an in-app support request so we have a link to look at and can help you fix this?