I am having issues running Chrome version 92 with the browser window open; in headless mode it works fine.
I am currently running my tests in a Docker container. The installation of Chrome and ChromeDriver follows:
RUN curl -s https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list && \
apt-get update && \
apt-get install -y \
xvfb \
google-chrome-stable=92.\* \
unzip
RUN curl -s -o /tmp/chromedriver.zip "https://chromedriver.storage.googleapis.com/$(curl -s https://chromedriver.storage.googleapis.com/LATEST_RELEASE_92)/chromedriver_linux64.zip" && \
unzip /tmp/chromedriver.zip chromedriver -d /usr/bin/ && \
chmod +x /usr/bin/chromedriver
After executing my command to run the tests, the browser opens but nothing happens.
It gets stuck on data; in the address bar, and the page keeps loading forever.
I'm currently using these Chrome options:
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--incognito")
chrome_options.add_argument("--disable-dev-shm-usage")
It looks like it was related to an old NVIDIA driver running on Linux:
You may need to run with --disable-gpu on Linux with NVIDIA driver older than 295.20
Source: Chromium documentation
So, adding chrome_options.add_argument("--disable-gpu") solved my problem.
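Putting it together, the option set that ended up working looks roughly like this (a minimal sketch; the import and the target URL are just for illustration):
from selenium import webdriver

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--incognito")
chrome_options.add_argument("--disable-dev-shm-usage")
chrome_options.add_argument("--disable-gpu")  # the fix for the old NVIDIA driver

driver = webdriver.Chrome(options=chrome_options)
driver.get("https://example.com")  # illustrative URL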
I'm currently evaluating Selenium in combination with GitLab CI as a testing tool for our website. This is my current .gitlab-ci.yml:
variables:
  GIT_STRATEGY: clone
  GIT_DEPTH: 0
stages:
  - tests
test:
  stage: tests
  image: node:latest
  tags:
    - linux
  before_script:
    - apt-get update
    - apt-get install -y chromium
    - npm install -g selenium-side-runner
    - npm install -g chromedriver
  script:
    - selenium-side-runner My-UI-Test.side
I'm getting the following error:
FAIL ./DefaultSuite.test.js
● Test suite failed to run
WebDriverError: unknown error: Chrome failed to start: exited abnormally.
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location /usr/bin/chromium is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
at Object.throwDecodedError (../../../../../../usr/local/lib/node_modules/selenium-side-runner/node_modules/selenium-webdriver/lib/error.js:550:15)
at parseHttpResponse (../../../../../../usr/local/lib/node_modules/selenium-side-runner/node_modules/selenium-webdriver/lib/http.js:560:13)
at Executor.execute (../../../../../../usr/local/lib/node_modules/selenium-side-runner/node_modules/selenium-webdriver/lib/http.js:486:26)
I've searched for the error message DevToolsActivePort file doesn't exist and it seems that Chrome doesn't like to be run with root privileges. A lot of answers suggest using the --no-sandbox or --disable-dev-shm-usage flags. But those are Chrome flags, and since I'm not calling Chrome directly, I can't use them. The website in question is also deployed from a different project, so I have no code to work with. The only files I can change are My-UI-Test.side and .side.yaml.
I have a separate project for my e2e tests, to which I've added a Dockerfile, my Selenium .side file, and the .side.yml config file. This project uses the GitLab container registry to upload the project as an image that can be pulled directly in GitLab CI.
Here are my files for the e2e test project:
package.json
...
"scripts": {
  "test": "selenium-side-runner test.side"
},
...
"dependencies": {
  "selenium-side-runner": "^3.17.0",
  "chromedriver": "^101.0.0"
}
These are the options that I am using; you might want to adjust a few things here and there. The capabilities are pretty much what you want, though.
I've also added the baseUrl key to this file instead of putting it directly into the package.json, because I use the same image for several environments with changing URLs, which I replace in my before_script whenever needed. (I left this out below, as your use case probably differs.)
.side.yml
capabilities:
  browserName: "chrome"
  goog:chromeOptions:
    binary: /usr/bin/google-chrome-stable
    args:
      - no-sandbox
      - disable-dev-shm-usage
      - headless
      - nogpu
output-directory: results
output-format: junit
baseUrl: <baseURL>
The Dockerfile might include a few unnecessary dependencies; you can likely remove a lot of them. Many of them are just copied over from my Puppeteer Dockerfile, since it uses the google-chrome-stable binary in much the same way. Downloading the fonts along with the google-chrome-stable binary might not be needed in your case either. So just adjust it to your needs.
Dockerfile
FROM node:14
RUN apt update
RUN apt install -y \
rsync \
grsync \
gnupg \
ca-certificates \
fonts-liberation \
libappindicator3-1 \
libasound2 \
libatk-bridge2.0-0 \
libatk1.0-0 \
libc6 \
libcairo2 \
libcups2 \
libdbus-1-3 \
libexpat1 \
libfontconfig1 \
libgbm1 \
libgcc1 \
libglib2.0-0 \
libgtk-3-0 \
libnspr4 \
libnss3 \
libpango-1.0-0 \
libpangocairo-1.0-0 \
libstdc++6 \
libx11-6 \
libx11-xcb1 \
libxcb1 \
libxcomposite1 \
libxcursor1 \
libxdamage1 \
libxext6 \
libxfixes3 \
libxi6 \
libxrandr2 \
libxrender1 \
libxss1 \
libxtst6 \
lsb-release \
wget \
xdg-utils
RUN apt-get update \
&& apt-get install -y wget gnupg \
&& wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-stable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf libxss1 \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY package.json /app
COPY .side.yml /app
COPY test.side /app
COPY results /app
RUN npm i -g chromedriver --unsafe-perm
RUN npm i -g selenium-side-runner --unsafe-perm
RUN npm install
And here is how I include it in my GitLab CI.
.gitlab-ci.yml:
e2e:
  stage: test
  image: <image-of-above-project>:1.0
  variables:
    GIT_STRATEGY: none
  script:
    - cat .side.yml
    - npm run test
If you need more information on the container registry, see https://docs.gitlab.com/ee/user/packages/container_registry/
I am trying to build a Docker image for my Selenium tests. However, I keep getting the error message "org.openqa.selenium.WebDriverException: unknown error: Chrome failed to start: crashed".
Please do not mark this as a duplicate; though I have referred to a lot of the answers provided in the links below, I am still not able to get through this. I have tried all the answers that are provided, but no luck.
Selenium: WebDriverException:Chrome failed to start: crashed as google-chrome is no longer running so ChromeDriver is assuming that Chrome has crashed
WebDriverException: unknown error: DevToolsActivePort file doesn't exist while trying to initiate Chrome Browser
Please find the Dockerfile and my Selenium code below.
The Dockerfile looks like this:
FROM selenium/standalone-chrome
FROM gradle
RUN gradle wrapper
USER root
RUN apt-get update; apt-get -y install wget gnupg2
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list
RUN apt-get update; apt-get -y install google-chrome-stable
COPY . /project
RUN chown -R gradle:gradle /project
RUN wget -N http://chromedriver.storage.googleapis.com/76.0.3809.25/chromedriver_linux64.zip -P ~/
RUN unzip ~/chromedriver_linux64.zip -d ~/
RUN rm ~/chromedriver_linux64.zip
RUN mv -f ~/chromedriver /project/executables/chromedriver
RUN chown gradle:gradle /project/executables/chromedriver
RUN chmod 0755 /project/executables/chromedriver
USER gradle
WORKDIR /project
ENV GRADLE_USER_HOME /project/.gradle_home
CMD gradle build --info
Selenium code:
ChromeOptions chromeOptions = new ChromeOptions();
chromeOptions.addArguments("--headless");
chromeOptions.addArguments("start-maximized"); // open Browser in maximized mode
chromeOptions.addArguments("disable-infobars"); // disabling infobars
chromeOptions.addArguments("--disable-extensions"); // disabling extensions
chromeOptions.addArguments("--disable-gpu"); // applicable to windows os only
chromeOptions.addArguments("--disable-dev-shm-usage"); // overcome limited resource problems
chromeOptions.addArguments("--no-sandbox"); // Bypass OS security model
System.setProperty("webdriver.chrome.driver","executables/chromedriver");
WebDriver driver = new ChromeDriver(chromeOptions);
driver.get("http://google.com");
As you can see from the error message, Chrome is starting at the default location (/usr/bin/google-chrome), but it is crashing.
Starting ChromeDriver 76.0.3809.25 (a0c95f440512e06df1c9c206f2d79cc20be18bb1-refs/branch-heads/3809#{#271}) on port 30275
Only local connections are allowed.
" org.openqa.selenium.WebDriverException: unknown error: Chrome failed to start: crashed" .
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
System info: host: 'd2e61fa0170d', ip: '172.17.0.2', os.name: 'Linux', os.arch: 'amd64', os.version: '4.9.125-linuxkit', java.version: '1.8.0_212'
Driver info: driver.version: ChromeDriver
I am using the latest ChromeDriver, 76.0.3809.25. I am assuming that the latest Google Chrome is fetched and installed.
Any help is appreciated.
It seems like you are having issues installing Google Chrome and its driver. Sharing my Dockerfile and Docker-compose.yml; I achieved this using Python. It also includes examples for Firefox and PhantomJS.
FROM ubuntu:bionic
RUN apt-get update && apt-get install -y \
python3 python3-pip \
fonts-liberation libappindicator3-1 libasound2 libatk-bridge2.0-0 \
libnspr4 libnss3 lsb-release xdg-utils libxss1 libdbus-glib-1-2 \
curl unzip wget \
xvfb
# install geckodriver and firefox
RUN GECKODRIVER_VERSION=`curl https://github.com/mozilla/geckodriver/releases/latest | grep -Po 'v[0-9]+.[0-9]+.[0-9]+'` && \
wget https://github.com/mozilla/geckodriver/releases/download/$GECKODRIVER_VERSION/geckodriver-$GECKODRIVER_VERSION-linux64.tar.gz && \
tar -zxf geckodriver-$GECKODRIVER_VERSION-linux64.tar.gz -C /usr/local/bin && \
chmod +x /usr/local/bin/geckodriver && \
rm geckodriver-$GECKODRIVER_VERSION-linux64.tar.gz
RUN FIREFOX_SETUP=firefox-setup.tar.bz2 && \
apt-get purge firefox && \
wget -O $FIREFOX_SETUP "https://download.mozilla.org/?product=firefox-latest&os=linux64" && \
tar xjf $FIREFOX_SETUP -C /opt/ && \
ln -s /opt/firefox/firefox /usr/bin/firefox && \
rm $FIREFOX_SETUP
# install chromedriver and google-chrome
RUN CHROMEDRIVER_VERSION=`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE` && \
wget https://chromedriver.storage.googleapis.com/$CHROMEDRIVER_VERSION/chromedriver_linux64.zip && \
unzip chromedriver_linux64.zip -d /usr/bin && \
chmod +x /usr/bin/chromedriver && \
rm chromedriver_linux64.zip
RUN CHROME_SETUP=google-chrome.deb && \
wget -O $CHROME_SETUP "https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb" && \
dpkg -i $CHROME_SETUP && \
apt-get install -y -f && \
rm $CHROME_SETUP
# install phantomjs
RUN wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2 && \
tar -jxf phantomjs-2.1.1-linux-x86_64.tar.bz2 && \
cp phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/local/bin/phantomjs && \
rm phantomjs-2.1.1-linux-x86_64.tar.bz2
RUN pip3 install selenium
RUN pip3 install pyvirtualdisplay
RUN pip3 install Selenium-Screenshot
RUN pip3 install requests
RUN pip3 install pytest
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
ENV PYTHONUNBUFFERED=1
ENV APP_HOME /usr/src/app
WORKDIR /$APP_HOME
COPY . $APP_HOME/
CMD tail -f /dev/null
CMD python3 example.py
Docker-compose.yml
selenium:
  build: .
  ports:
    - 4000:4000
    - 443:443
  volumes:
    - ./data/:/data/
  privileged: true
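For reference, the example.py that the final CMD runs might look roughly like the sketch below; the flags mirror the usual advice for Chrome in containers, and the target URL is just an illustration:
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless")               # no display inside the container
options.add_argument("--no-sandbox")             # needed when Chrome runs as root
options.add_argument("--disable-dev-shm-usage")  # work around the small /dev/shm
options.add_argument("--window-size=1920,1080")

driver = webdriver.Chrome(options=options)       # /usr/bin/chromedriver is on PATH
try:
    driver.get("https://example.com")            # illustrative URL
    print(driver.title)
finally:
    driver.quit()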
I have tests with Capybara in a Docker container. I use this to set up Selenium:
Capybara.register_driver :selenium do |app|
require 'selenium/webdriver'
Selenium::WebDriver::Firefox::Binary.path = ENV['FIREFOX_BINARY_PATH'] || Selenium::WebDriver::Firefox::Binary.path
Capybara::Selenium::Driver.new(app, :browser => :firefox)
end
It works when we run the tests with Xvfb, but I want to see the real browser while the tests are running, so I'm looking for a way to use a browser on the host.
I think it's possible to launch geckodriver on the host and share port 4444, but I haven't succeeded yet. Capybara launches a new instance of geckodriver, in the container, each time.
What can I do?
Edit 1: Added more info
Here is all the config I have for Capybara:
#<Capybara::SessionConfig:0x0055ce67731a00
 @always_include_port=false,
 @app_host="http://domain-test.engagement.lvh.me:1300",
 @automatic_label_click=false,
 @automatic_reload=true,
 @default_host="http://www.example.com",
 @default_max_wait_time=5,
 @default_selector=:css,
 @enable_aria_label=false,
 @exact=false,
 @exact_text=false,
 @ignore_hidden_elements=true,
 @match=:smart,
 @raise_server_errors=true,
 @run_server=true,
 @save_path=#<Pathname:/app/tmp/capybara>,
 @server_errors=[StandardError],
 @server_host=nil,
 @server_port=1300,
 @visible_text_only=false,
 @wait_on_first_by_default=false>
Here is my docker-compose file:
version: '3'
services:
  web:
    build: .
    command: rails s -b 0.0.0.0
    working_dir: /app
    volumes:
      - .:/app
      - ./tmp/bundle:/usr/local/bundle
      - $SSH_AUTH_SOCK:/ssh-agent
    environment:
      - BUNDLE_JOBS=4
      - SSH_AUTH_SOCK=/ssh-agent
      - MONGO_HOST=mongo
      - REDIS_HOST=redis
      - MEMCACHE_HOST=memcache
    ports:
      - "80:3000"
      - "1300:1300"
    links:
      - mongo
      - redis
      - memcache
  mongo:
    image: mongo:3.4.9
    volumes:
      - ~/data/mongo/db:/data/db
  redis:
    image: redis:2.8.17
    volumes:
      - ~/data/redis:/data
  memcache:
    image: memcached:1.5-alpine
And finally my Dockerfile:
FROM ruby:2.3.1
RUN apt-get update && apt-get install -y build-essential qt5-default \
libqt5webkit5-dev gstreamer1.0-plugins-base gstreamer1.0-tools gstreamer1.0-x \
xvfb rsync
ARG GECKODRIVER_VERSION=0.19.0
RUN wget --no-verbose -O /tmp/geckodriver.tar.gz https://github.com/mozilla/geckodriver/releases/download/v$GECKODRIVER_VERSION/geckodriver-v$GECKODRIVER_VERSION-linux64.tar.gz \
&& rm -rf /opt/geckodriver \
&& tar -C /opt -zxf /tmp/geckodriver.tar.gz \
&& rm /tmp/geckodriver.tar.gz \
&& mv /opt/geckodriver /opt/geckodriver-$GECKODRIVER_VERSION \
&& chmod 755 /opt/geckodriver-$GECKODRIVER_VERSION \
&& ln -fs /opt/geckodriver-$GECKODRIVER_VERSION /usr/bin/geckodriver
RUN apt-get install -y libgtk-3-dev \
&& wget --no-verbose https://ftp.mozilla.org/pub/firefox/releases/56.0/linux-x86_64/en-US/firefox-56.0.tar.bz2 \
&& tar -xjf firefox-56.0.tar.bz2 \
&& mv firefox /opt/firefox56 \
&& ln -s /opt/firefox56/firefox /usr/bin/firefox
ENV TZ Europe/Paris
RUN echo $TZ > /etc/timezone && \
apt-get update && apt-get install -y tzdata && \
dpkg-reconfigure -f noninteractive tzdata && \
apt-get clean
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6 && \
echo "deb http://repo.mongodb.org/apt/debian jessie/mongodb-org/3.4 main" | tee /etc/apt/sources.list.d/mongodb-org-3.4.list && \
apt-get update && \
apt-get install -y mongodb-org
RUN gem install bundler
RUN mkdir /app
WORKDIR /app
In order to get Selenium to use a remote geckodriver instance you need to provide the url option to it.
Capybara.register_driver :selenium do |app|
require 'selenium/webdriver'
Capybara::Selenium::Driver.new(app, :browser => :firefox, url: 'http://<your ip as reachable from docker>:<port geckodriver is available on>')
end
This will then require you to run geckodriver on the machine you want Firefox to run on, possibly using the --binary option to specify where Firefox is located. It will also probably require setting Capybara.app_host (and possibly Capybara.always_include_port depending on your exact configuration) so the browser requests are routed back to the app under test running on the Docker instance.
Another thing to consider is that the AUT will need to be bound to an interface on the docker instance which is reachable from the host. By default Capybara binds to the 127.0.0.1 interface which probably isn't reachable, so you can set Capybara.server = '0.0.0.0' to bind to all available interfaces, or specify the specific external interface.
I use Python-Selenium in my spider (Scrapy); to use Selenium I need to install xvfb on Scrapinghub.
When I use apt-get to install xvfb, I get this error message:
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
Is there any other way to install xvfb on Scrapinghub?
UPDATE 1
I read this and tried to use Docker; I am stuck at this stage:
shub-image init --requirements path/to/requirements.txt
I read this:
If you are getting an ImportError like this while running shub-image init:
You should make sure you have the latest version of shub installed by running:
$ pip install shub --upgrade
but I still get this error:
Traceback (most recent call last):
File "/usr/local/bin/shub-image", line 7, in <module>
from shub_image.tool import cli
File "/usr/local/lib/python2.7/dist-packages/shub_image/tool.py", line 42, in <module>
command_module = importlib.import_module(module_path)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/usr/local/lib/python2.7/dist-packages/shub_image/push.py", line 4, in <module>
from shub.deploy import list_targets
ImportError: cannot import name list_targets
Did you try:
sudo apt-get install xvfb
Another way is to compile the package manually, something like:
apt-get source xvfb
./configure --prefix=$HOME/myapps
make
make install
And the third way is to download the .deb from the source web page https://pkgs.org/download/xvfb
After downloading it, you can mv it to the path of the downloaded sources:
mv xvfb_1.16.4-1_amd64.deb /var/cache/apt/archives/
Then change to that directory and do:
sudo dpkg -i xvfb_1.16.4-1_amd64.deb
and that's all!
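Once xvfb is available, one common way to drive it from Python Selenium is the pyvirtualdisplay wrapper. This is not part of the original answer, just a sketch assuming pyvirtualdisplay is installed and that Firefox and geckodriver are on the PATH:
from pyvirtualdisplay import Display
from selenium import webdriver

# Start a virtual X display backed by Xvfb
display = Display(visible=0, size=(1366, 768))
display.start()

driver = webdriver.Firefox()       # needs geckodriver on PATH
driver.get("https://example.com")  # illustrative URL
print(driver.title)
driver.quit()

display.stop()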
I resolved my problem (using Selenium on Scrapinghub):
1- For xvfb in Docker I use:
RUN apt-get install -qy xvfb
2- For creating the Docker image I used this,
and for installing geckodriver I use this code:
#
# Geckodriver Dockerfile
#
FROM blueimp/basedriver
# Add the Firefox release channel of the Debian Mozilla team:
RUN echo 'deb http://mozilla.debian.net/ jessie-backports firefox-release' >> \
/etc/apt/sources.list \
&& curl -sL https://mozilla.debian.net/archive.asc | apt-key add -
# Install Firefox:
RUN export DEBIAN_FRONTEND=noninteractive \
&& apt-get update \
&& apt-get install --no-install-recommends --no-install-suggests -y \
firefox \
# Remove obsolete files:
&& apt-get clean \
&& rm -rf \
/tmp/* \
/usr/share/doc/* \
/var/cache/* \
/var/lib/apt/lists/* \
/var/tmp/*
# Install geckodriver:
RUN export BASE_URL=https://github.com/mozilla/geckodriver/releases/download \
&& export VERSION=$(curl -sL \
https://api.github.com/repos/mozilla/geckodriver/releases/latest | \
grep tag_name | cut -d '"' -f 4) \
&& curl -sL \
$BASE_URL/$VERSION/geckodriver-$VERSION-linux64.tar.gz | tar -xz \
&& mv geckodriver /usr/local/bin/geckodriver
USER webdriver
CMD ["geckodriver", "--host", "0.0.0.0"]
from here
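With that image, geckodriver listens on 0.0.0.0 on its default port 4444, so the spider can attach to it with a Remote driver. A minimal sketch (the hostname geckodriver and the port are assumptions; adjust them to however your containers are linked):
from selenium import webdriver

options = webdriver.FirefoxOptions()
options.add_argument("-headless")  # no X display needed on this path

# geckodriver speaks the WebDriver protocol itself, so Remote can talk
# to it directly without a Selenium server in between.
driver = webdriver.Remote(
    command_executor="http://geckodriver:4444",  # assumed hostname/port
    options=options,
)
driver.get("https://example.com")  # illustrative URL
print(driver.title)
driver.quit()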
I have the following Docker container:
FROM ubuntu:xenial
MAINTAINER Hasan Kara
RUN set -xe \
\
# Install Java, Chrome, Xvfb, and unzip
&& apt-get update \
&& apt-get install -y \
openjdk-8-jre \
chromium-browser \
xvfb \
curl \
wget \
unzip \
&& rm -rf /var/lib/apt/lists/* \
&& ln -s /usr/lib/chromium-browser/chromium-browser /usr/bin/google-chrome \
\
# Download and install chrome drive and selenium server standalone
&& wget -q "https://chromedriver.storage.googleapis.com/2.27/chromedriver_linux64.zip" \
&& wget -q "http://selenium-release.storage.googleapis.com/3.0/selenium-server-standalone-3.0.1.jar" \
&& unzip chromedriver_linux64.zip \
&& mv chromedriver /usr/local/bin \
&& mv selenium-server-standalone-3.0.1.jar /usr/local/bin
VOLUME /downloads
ENV DISPLAY :10
CMD export DISPLAY=:10 && Xvfb :10 -screen 0 1366x768x24 -ac & google-chrome --no-sandbox -remote-debugging-port=9222 & java -jar /usr/local/bin/selenium-server-standalone-3.0.1.jar &
EXPOSE 4444 9222
Which I run with:
docker run --rm -it --shm-size=512m --name chromium -p 4444:4444 -p 9222:9222 hasankarafhnw/seleniumchromium /bin/bash
And inside bash I run this by hand, because for some reason the CMD doesn't work...
export DISPLAY=:10 && Xvfb :10 -screen 0 1366x768x24 -ac & google-chrome --no-sandbox -remote-debugging-port=9222 & java -jar /usr/local/bin/selenium-server-standalone-3.0.1.jar &
Now I can connect to the Selenium hub at "http://192.168.99.100:4444/wd/hub/static/resource/hub.html" perfectly, but if I try to:
Create a Session through the hub
Create a Session with this code in the container.
Create a Session through a RemoteWebDriver running on the host OS.
I get the following error:
Only local connections are allowed.
INFO - Attempting bi-dialect session, assuming Postel's Law holds true on the remote end
INFO - Executing: [take screenshot]) WARN - Exception thrown
org.openqa.selenium.NoSuchSessionException: no such session
Chromedriver: 2.27.440175
Chromium: 55.0.2883.87
Selenium-server-standalone: 3.0.1
Docker: 1.13
Host-OS: Win 7
RemoteWebDriver code:
ChromeOptions options = new ChromeOptions();
options.addExtensions(new File("./extension_0_2_0_10.crx"));
DesiredCapabilities capabilities = DesiredCapabilities.chrome();
capabilities.setCapability(ChromeOptions.CAPABILITY, options);
WebDriver driver = new RemoteWebDriver(new URL( "http://192.168.99.100:4444/wd/hub"), capabilities);
driver = new Augmenter().augment(driver);
driver.get("http://google.com");
You need to set up the whitelisted-ips argument for the chromedriver executable (not Chrome!). You can achieve this by setting the system property webdriver.chrome.whitelistedIps when the node starts, like in your command:
CMD export DISPLAY=:10 && Xvfb :10 -screen 0 1366x768x24 -ac & google-chrome --no-sandbox -remote-debugging-port=9222 & java -Dwebdriver.chrome.whitelistedIps= -jar /usr/local/bin/selenium-server-standalone-3.0.1.jar &
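For comparison, if chromedriver were started by the client itself rather than by selenium-server-standalone, the same flag could be passed as a service argument. A sketch with Selenium 4's Python bindings (the driver path matches the Dockerfile above; everything else is illustrative):
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

# An empty --whitelisted-ips value tells chromedriver to accept
# connections from any IP, not only localhost.
service = Service(
    executable_path="/usr/local/bin/chromedriver",
    service_args=["--whitelisted-ips="],
)
options = webdriver.ChromeOptions()
options.add_argument("--no-sandbox")

driver = webdriver.Chrome(service=service, options=options)
driver.get("https://google.com")
driver.quit()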