While trying to dockerise Selenium end-to-end tests using the Selenium Docker image selenium/standalone-chrome, I get the error: Error retrieving a new session from the selenium server Connection refused! Is selenium server started?
Yes, the Selenium server starts up according to the console output. Any ideas?
FROM selenium/standalone-chrome
USER root
# installing node
RUN apt-get update
RUN apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_7.x | bash
RUN apt-get install -y nodejs
RUN node -v
RUN npm -v
# Installing Yarn
#RUN rm -r /usr/local/bin/yarn
RUN npm install -g -y yarn
ENV PATH $PATH:/usr/local/bin/yarn
#copying files
WORKDIR /app
COPY . .
# debug
RUN ls -alh .
#installing yarn
RUN yarn install
EXPOSE 4444
RUN yarn
CMD yarn test
The problem is your approach to solving this. You are inheriting your image from selenium/standalone-chrome, which is supposed to run the Selenium server and browser. Into this image you are adding your tests and overriding the CMD to run them.
So when you build and launch this image, you don't get any browser, because the CMD has been overridden to run the tests instead. When we build with Docker we keep dependent services in separate containers; in most cases it is preferred to run one service/process per container. In your case, when the tests run, the browser/server process is missing, and that is the reason for the connection refused error.
So you need to run two containers here: one for selenium/standalone-chrome and one for your tests.
Also, your image should inherit from a node image and not from the Selenium Chrome image. You should also not run node -v and npm -v while building images; they just create extra layers in your final image.
FROM node:7
USER root
# installing curl (node and npm already come with the node base image)
RUN apt-get update && apt-get install -y curl
# Installing Yarn
RUN npm install -g -y yarn
ENV PATH $PATH:/usr/local/bin/yarn
#copying files
WORKDIR /app
COPY . .
# installing dependencies
RUN yarn install
RUN yarn
CMD yarn test
Now you need to create a docker-compose file to run a composition that has both your tests and chrome:
version: '3'
services:
  chrome:
    image: selenium/standalone-chrome
  tests:
    build: .
    depends_on:
      - chrome
Install docker-compose and run docker-compose up to run the above composition. Also, in your tests, make sure to use the URL http://chrome:4444/wd/hub and the Remote WebDriver, not a local driver.
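For example, if your tests use the selenium-webdriver package (an assumption, since the test framework isn't shown above), pointing them at the chrome service could look roughly like this:

// a minimal sketch: "chrome" is the compose service name, which the compose
// network resolves to the container running the Selenium server
const { Builder } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder()
    .forBrowser('chrome')
    .usingServer('http://chrome:4444/wd/hub') // remote server, not a local chromedriver
    .build();
  try {
    await driver.get('https://example.com');
    console.log(await driver.getTitle());
  } finally {
    await driver.quit();
  }
})();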
I used a selenium/node-chrome image, but what resolved it for me was making sure chromedriver, selenium-server, and nightwatch were all set to the latest versions in my package.json.
Related
I’m configuring a very simple CI job. GitLab Runner is running on my own server, the specific runner for this project has been registered, with the shell executor, as I want to simply run shell commands.
stages:
  - build

build:
  stage: build
  script:
    - npm install
    - npm run build
  artifacts:
    paths:
      - "public/dist/main.js"
  only:
    - master
The job fails at the first command, npm install, with npm: command not found. I just installed node and npm via nvm. If I SSH into my server and run npm -v, I can see version 8.5.5 is installed. If I sudo su gitlab-runner, which I suppose is the user GitLab Runner runs as, npm -v works just as well.
I installed npm while gitlab-runner was already running. So I ran service gitlab-runner restart, thinking that it had to reevaluate its PATH, but it didn’t fix the issue.
I fixed it by simply adding this command before npm install: . ~/.bashrc.
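Concretely, the job above becomes (only the script section changes):

build:
  stage: build
  script:
    - . ~/.bashrc
    - npm install
    - npm run build
  artifacts:
    paths:
      - "public/dist/main.js"
  only:
    - master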
I’m not sure why gitlab-runner didn’t properly read .bashrc before, even though I restarted it. Maybe it’s not supposed to? That would be contrary to what’s said in the GitLab CI runners docs.
N.B.: A key element in my being able to debug this was to clone the repo into a folder on my server, cd into it, and run gitlab-runner exec shell build after any (local) change to .gitlab-ci.yml. Skipping the whole commit + push + wait cycle was a huge time (and sanity) saver.
I have a small Angular app which I am trying to build using GitLab CI and the node Docker image. When I run the tests with npm run test, it fails with the following error:
ERROR [launcher]: No binary for Chrome browser on your platform. Please, set "CHROME_BIN" env variable.
.gitlab-ci.yml:
stages:
  - build

variables:
  NPM_CONFIG_REGISTRY: https://test.com/xx/api/npm/npm-all

build:
  stage: build
  image: node:12.9
  script:
    - npm install
    - npm run build:prod
    - npm run test
  tags:
    - DOCKER
In the above config, npm run test executes ng test, as configured in package.json.
The build runs fine, but the test step looks for a Chrome browser. I also tried running the tests headless with the command below, but it resulted in the same error:
ng test --no-watch --browsers=ChromeHeadless
How do I make Chrome available to this build?
Either install Chrome by yourself or try an existing Docker image that already includes it.
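For example, staying with the node:12.9 image, a sketch that installs Chromium from the distribution's packages and points Karma at it (the package name and binary path are assumptions and can differ depending on the Debian release the image is based on; you may also need a ChromeHeadless launcher with --no-sandbox when running as root in CI):

build:
  stage: build
  image: node:12.9
  variables:
    CHROME_BIN: /usr/bin/chromium   # Karma reads this to find the browser
  script:
    - apt-get update && apt-get install -y chromium
    - npm install
    - npm run build:prod
    - npm run test -- --no-watch --browsers=ChromeHeadless
  tags:
    - DOCKER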
This is a followup to How can I add and use nvm in a DDEV web container?
My dockerfile now looks like this:
ARG BASE_IMAGE
FROM $BASE_IMAGE
ENV NVM_DIR=/usr/local/nvm
ENV NODE_DEFAULT_VERSION=v8.16.1
RUN curl -sL https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh -o install_nvm.sh
RUN mkdir -p $NVM_DIR && bash install_nvm.sh
RUN echo "source $NVM_DIR/nvm.sh" >>/etc/profile
RUN bash -ic "nvm install $NODE_DEFAULT_VERSION && nvm use $NODE_DEFAULT_VERSION"
RUN chmod -R ugo+w $NVM_DIR
RUN npm install -g foundation-cli
RUN npm install -g gulp-cli
RUN yarn --cwd foundation-src install
The last line returns an error: Service 'web' failed to build: The command '/bin/sh -c yarn --cwd foundation-src install' returned a non-zero code: 1
When I ddev ssh and then run yarn --cwd foundation-src install it does the job (running yarn in the foundation-src folder).
I also tried RUN (cd foundation-src; yarn install;) but no luck either. I prefer the first command anyway. But what is going on? Why can I run things from inside the container but not from the Dockerfile?
Your command is RUN yarn --cwd foundation-src install - it's assuming that there is a subdirectory "foundation-src" under the current directory.
But the Dockerfile is running long, long before your source is anywhere useful. The container has not been run yet, nothing is mounted. So you can't do things that require your source code to be present.
Since this command appears to require your source code to be present, I think you'll be better off doing this as a post-start hook, perhaps:
hooks:
  post-start:
    - exec: "yarn --cwd foundation-src install"
Since actions like yarn install happen irregularly, it's also easy to just run ddev exec yarn --cwd foundation-src install, and just as easy to create a custom command for when you need it.
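A custom command is just a small script checked into the project, e.g. .ddev/commands/web/yarn-foundation (a sketch; the file name is up to you), which you would then run as ddev yarn-foundation:

#!/bin/bash

## Description: run yarn install in foundation-src inside the web container
## Usage: yarn-foundation

yarn --cwd foundation-src install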
I am trying to set up a local development environment consisting of:
Ubuntu server vagrant box
My existing vuejs project created using Vue CLI 3 and passed to vagrant via synced_folder
Then run yarn run serve and access this on my host using port forwarding on the vagrant box.
Background:
I have developed a Vue CLI 3 project on my Ubuntu 16.04 laptop which is working well; however, I want to move it inside a vagrant box to keep my local machine tidy. I currently use yarn run serve, which works well, and I want to be able to run this command inside a new vagrant development environment.
Summary of Problems/Issues:
the vue command is not found after installing its dependencies
permission issues are spat out by yarn when attempting to run yarn run serve inside the vagrant box
there is an fsevents@1.2.4 message when running yarn global add @vue/cli
Provisioning the local dev environment:
The Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.define "webserver_dev" do |webserver_dev|
    webserver_dev.vm.box = "ubuntu/xenial64"
    webserver_dev.vm.network "private_network", ip: "192.168.33.10"
    webserver_dev.vm.network "forwarded_port", guest: 80, host: 8888
    webserver_dev.vm.network "forwarded_port", guest: 8080, host: 8080
    webserver_dev.vm.hostname = "develop.dev"
    webserver_dev.vm.synced_folder ".", "/var/www", :mount_options => ["dmode=777", "fmode=666"]
    webserver_dev.ssh.forward_agent = true
    webserver_dev.vm.provider "virtualbox" do |vb|
      vb.memory = "1824"
      vb.cpus = "2"
    end
  end
end
Provisioning of the vagrant box: ubuntu/xenial64 (virtualbox, 20180802.0.0):
sudo apt update && sudo apt upgrade
sudo apt install build-essential libssl-dev -y
# install node and npm:
cd ~
curl -sL https://deb.nodesource.com/setup_10.x -o nodesource_setup.sh
sudo bash nodesource_setup.sh
sudo apt install -y nodejs
# install yarn
curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
sudo apt-get update && sudo apt-get install yarn
# Show installed versions
yarn -v   # outputs 1.9.4
node -v   # outputs v10.9.0
npm -v    # outputs 6.2.0
Problems/Issues Output:
When I navigate to my existing vue project folder and run yarn run serve inside vagrant ssh I get the following error:
yarn run v1.9.4
$ vue-cli-service serve
/bin/sh: 1: vue-cli-service: Permission denied
error Command failed with exit code 126.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
When I run sudo yarn run serve
(I shouldn't have to run this as root anyway but:)
yarn run v1.9.4
$ vue-cli-service serve
/bin/sh: 1: vue-cli-service: Permission denied
error Command failed with exit code 126.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Running vue --version
vagrant@cc:~$ vue --version
No command 'vue' found, did you mean:
Command 'vpe' from package 'texlive-latex-extra' (universe)
vue: command not found
Output from running yarn global add @vue/cli
As shown in the official vue-cli installation documentation
NOTE: Note the fsevents@1.2.4 message I get. Could this be what is causing the problems?
vagrant@cu:~$ yarn global add @vue/cli
yarn global v1.9.4
[1/4] Resolving packages...
[2/4] Fetching packages...
[----------------------------------------------------------------------------------------------------------------------------------------] 0/617(node:7694) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
info fsevents@1.2.4: The platform "linux" is incompatible with this module.
info "fsevents@1.2.4" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Installed "@vue/cli@3.0.1" with binaries:
- vue
Done in 58.25s.
Summary:
Has anyone out there achieved a local development environment where they are successfully able to run yarn run serve inside it and access the result on their host machine?
I would be very interested to see how other developers approach their local development for Vue.js projects which also have other services requiring reverse proxies (e.g. a Node.js app running on a different port).
I have spent an awful lot of time trying to set this up to no avail. Maybe these tools just don't play well together. If you think you could help I would be very grateful. Thanks
Temporary Workaround (inspired by this post):
After further troubleshooting I found the problem is certainly a permissions-related issue (not related to vue-cli).
I think that, because my Vagrant box uses VirtualBox, there is a VirtualBox issue with symbolic links in the synced folder from my host, which changes permissions and stops the chmod command from having any effect on those files. In my case the files in the node_modules/.bin directory were not executable.
For anyone with similar issues, here is my current workaround (do yourself a favour and read https://github.com/hashicorp/vagrant/issues/713, I wish I'd found it earlier!):
1) Copy the project's package.json to the yarn global directory /home/vagrant/.config/yarn/global/:
cp /var/www/project/package.json /home/vagrant/.config/yarn/global/
2) Install the project's dependencies in the yarn global directory:
cd /home/vagrant/.config/yarn/global
yarn install
3) Now, returning to the project and running yarn run serve works, as it uses the node_modules from /home/vagrant/.config/yarn/global/node_modules/.bin/, which has the correct executable permissions.
cd /var/www/project
yarn run serve
Example of cause of the issue:
1) Change directory to your project's node_modules/.bin and run ls -la to see the permissions:
cd /var/www/project/node_modules/.bin
ls -la
Outputs:
lrw-rw-rw- 1 vagrant vagrant 18 Aug 29 00:21 which -> ../which/bin/which
2) Attempt to make the file executable:
chmod 777 ./which   # adding sudo doesn't help either
Outputs:
lrw-rw-rw- 1 vagrant vagrant 18 Aug 29 00:21 which -> ../which/bin/which
OLD ANSWER (DIDN'T WORK): The solution I originally used was taken from here: Source
Adding this to the Vagrantfile enables symbolic links to work properly.
I am using Ubuntu on both my host and guest machines, so I can't be sure this will work for Mac and Windows.
config.vm.provider "virtualbox" do |v|
  v.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/vagrant", "1"]
end
Further Reading:
https://github.com/hashicorp/vagrant/issues/713
You need to use sudo when installing vue-cli.
The vue-cli documentation at https://cli.vuejs.org/guide/installation.html states that:
To install the new package, use one of the following commands. You
need administrator privileges to execute these unless npm was
installed on your system through a Node.js version manager (e.g. n or
nvm).
npm install -g @vue/cli
# OR
yarn global add @vue/cli
I am using Node 6.10.1 and npm 3.10.10 on a Dell XPS 15 running Ubuntu 16.04 with Kernel 4.13.0.0-36-generic.
I am behind a corporate proxy which is configured through cntlm.
When I run npm install -d on a project, it works for a short time, and then I get Error: socket hang up.
I have found numerous questions about my problem but no solution seemed to work.
Here is an extract of my npm config list:
; cli configs
user-agent = "npm/3.10.10 node/v6.10.1 linux x64"
; userconfig /home/msb/.npmrc
https-proxy = "http://localhost:3128/"
registry = "http://urlTocorporateRegistryWhichWorksOnOtherComputers"
strict-ssl = false
; node bin location = /home/msb/.nvm/versions/node/v6.10.1/bin/node
; cwd = /home/msb
; HOME = /home/msb
; "npm config ls -l" to show all defaults.
I cannot change the registry since we are using some internal modules, and I have to keep the current versions of node/npm.
I have already tried :
Using the proxy directly in npm config rather than through cntlm
Limiting my upload/download capabilities with trickle through the command trickle -s -d 100 -u 100 npm install -d
Another indication: it works on Windows, and I have a colleague running Ubuntu 17.04 on a slower PC for whom it works. We think my machine might be a bit too brutal when requesting the registry. Does anyone know a way to slow npm requests down?
It used to work through yarn but some new developments have forced me to go back to npm.
Has anyone encountered and corrected this problem ?
Thanks for your help.
I experienced the same problem, with no apparent cause, on Ubuntu 18.04.
I finally used docker with bind mounts to solve it. The steps are the following:
Create a Dockerfile with the following elements (you can also run the base image directly if you don't need to configure a proxy like I do):
FROM node:6.10.1
ENV HTTPS_PROXY "http://yourproxy:yourport/"
# Different RUN commands to configure npm and git corporate proxy
WORKDIR /home/root/
Build the image (from the dockerfile's folder): docker image build -f npm-installer/Dockerfile -t custom-npm-installer .
Go inside the project folder where you would normally run npm install
Run the following command to run the container interactively: docker container run -it --network host -v </host/path/to/pj>:/home/root/pj-to-install --name custom-npm-installer custom-npm-installer bash
You can now run the npm install command from the container. Be careful, however: you'll then need to chmod the node_modules folder recursively, since the container runs as root by default.
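For instance, back on the host after exiting the container, something along these lines (an assumption about what is needed; since the container wrote node_modules as root, you may prefer chown over chmod):

cd /host/path/to/pj                                  # the folder you bind-mounted into the container
sudo chown -R "$(id -u)":"$(id -g)" node_modules     # hand the files back to your user
# or, if you only care about permissions rather than ownership:
# sudo chmod -R u+rwX,go+rX node_modules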
Another thing: if you're using node-sass, it is usually compiled on the fly during npm install, matching your OS version and architecture. So if your Linux distribution is not exactly the same as the container's, you might need to recompile node-sass on your host after running npm install in the container. No worries though, node-sass will give you the command to run the moment you launch your application.