I am looking for a way to update the PATH in Jenkins for running Selenium tests with PyTest.
I need to run a recent version of chromedriver, but can't due to an infrastructure deficiency: our base image runs Debian, where the latest available version is 73, and I need at least 83.
There is already a version of chromedriver installed on the image at /usr/bin, and I need to be able to point to a different version.
The Jenkins chromedriver plugin appears to just use the latest version available for Debian, which doesn't help me at all.
Until I have time to address the systemic issue, I'd like to just install chromedriver and update PATH - because Selenium requires chromedriver on the PATH.
It seemed, for ease of use, that https://pypi.org/project/chromedriver-binary/ was a good solution - it installs just fine, and the shell script chromedriver-path echoes the location, so I could just update PATH as the documentation shows: export PATH=$PATH:`chromedriver-path`
This doesn't seem to work in Jenkins, though - PATH is not updated:
stages {
    stage('build') {
        steps {
            withCredentials([...]) {
                sh """
                    alias python=python3.8
                    python -m venv --system-site-packages venv  # only for jenkins
                    python -u setup.py
                    . venv/bin/activate
                    which chromedriver    # /usr/bin/chromedriver
                    chromedriver-path     # path/to/python/lib/python3.8/site-packages/chromedriver_binary
                    export PATH=$PATH:`chromedriver-path`
                    which chromedriver    # /usr/bin/chromedriver
                """
                sh "python -m pytest"
            }
        }
    }
}
I have looked at the withEnv() option and the environment{} directive, but I'm not sure how to access that binary and update PATH once chromedriver-binary has been installed - it appears that environment{} would not have access to shell scripts that are installed in the individual steps.
Any tips would be greatly appreciated.
The issue may actually be in the Jenkinsfile declaration.
Try using sh with single quotes ('''), so that Groovy doesn't try to interpolate $PATH itself before the shell ever sees it. Also, binaries are searched in the directories listed in PATH from left to right, so to override the system chromedriver you must put your directory at the beginning, not at the end. (Declarative pipelines also let you prepend to PATH with withEnv(["PATH+SOMETHING=/some/dir"]), but the shell-level export below works fine within a single sh step.)
If I alter your code snippet:
stages {
    stage('build') {
        steps {
            withCredentials([...]) {
                sh '''
                    alias python=python3.8
                    python -m venv --system-site-packages venv  # only for jenkins
                    python -u setup.py
                    . venv/bin/activate
                    which chromedriver    # /usr/bin/chromedriver
                    chromedriver-path     # path/to/python/lib/python3.8/site-packages/chromedriver_binary
                    export PATH=$(chromedriver-path):$PATH
                    echo $PATH            # just to check the output; your path should be at the beginning
                    which chromedriver    # this should now find the proper chromedriver
                '''
                sh "python -m pytest"
            }
        }
    }
}
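To see the left-to-right lookup in action, here is a minimal shell session (the site-packages path below is hypothetical):
which -a chromedriver    # all matches, in PATH order
#/usr/bin/chromedriver
export PATH=$(chromedriver-path):$PATH
which -a chromedriver    # the pip-installed copy now shadows the system one
#/path/to/venv/lib/python3.8/site-packages/chromedriver_binary/chromedriver
#/usr/bin/chromedriver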
I am unable to get a Node.js script that drives Chrome to load a local file.
It works on Ubuntu 18.04 but not on 22.04.
Has there been some significant change that would affect local-file loading, or is there something wrong in my code?
const { Builder } = require('selenium-webdriver')

async function start() {
    const chrome = require('selenium-webdriver/chrome')
    const options = new chrome.Options()
    options.addArguments('--disable-dev-shm-usage')
    options.addArguments('--no-sandbox')
    const driver = new Builder()
        .forBrowser('chrome')
        .setChromeOptions(options)
        .build()
    await driver.get('file://' + __dirname + '/myfile.html')
    await driver.sleep(10000)
    const text = await driver.executeScript('return document.documentElement.innerText')
    console.log(text)
    driver.quit()
}

start()
The result is:
Your file couldn’t be accessed
It may have been moved, edited or deleted.
ERR_FILE_NOT_FOUND
I can confirm that myfile.html is definitely present.
Using console.log to show the file argument value shows it is the same for both older and newer Ubuntu.
Changing the driver.get argument to a website, e.g. https://www.google.com/ correctly shows the webpage content in the output.
The local file code fails as above using:
Ubuntu 22.04.1 LTS
Node v16.15.0
chromium-browser Chromium 108.0.5359.71 snap
It works fine on:
Ubuntu 18.04.6 LTS
Node v16.15.0
Chromium 107.0.5304.87 Built on Ubuntu, running on Ubuntu 18.04
This seems to be because the default Ubuntu 22.04 Chromium package is a snap, which limits local file access to /home/.
Switching to the .deb distribution from Google solves the issue. The Chromedriver also needs to match.
# Chrome browser - .deb package, not the standard OS Snap which limits access to only /home/
# See: https://askubuntu.com/questions/1184357/why-cant-chromium-suddenly-access-any-partition-except-for-home
# See: https://www.ubuntuupdates.org/ppa/google_chrome?dist=stable
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
apt-get update
apt-get install google-chrome-stable
# Chromedriver, matching version number
# See: https://skolo.online/documents/webscrapping/#pre-requisites
# See step 3 at: https://tecadmin.net/setup-selenium-chromedriver-on-ubuntu/
# See: https://sites.google.com/chromium.org/driver/
#!# NB This will become out of sync if apt-get updates google-chrome-stable
google-chrome --version
wget https://chromedriver.storage.googleapis.com/108.0.5359.71/chromedriver_linux64.zip
unzip chromedriver_linux64.zip
sudo mv chromedriver /usr/bin/chromedriver
sudo chown root:root /usr/bin/chromedriver
sudo chmod +x /usr/bin/chromedriver
rm chromedriver_linux64.zip
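If you'd rather not hard-code the driver version (the #!# note above), a hedged sketch that derives it from the installed browser; this relies on the LATEST_RELEASE_<major> files that chromedriver.storage.googleapis.com publishes for Chrome versions up to 114:
# fetch the chromedriver release matching the installed Chrome major version
major=$(google-chrome --version | grep -oP '[0-9]+' | head -1)
version=$(curl -s "https://chromedriver.storage.googleapis.com/LATEST_RELEASE_${major}")
wget -q "https://chromedriver.storage.googleapis.com/${version}/chromedriver_linux64.zip"
unzip -o chromedriver_linux64.zip
sudo mv chromedriver /usr/bin/chromedriver
rm chromedriver_linux64.zip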
I need to run Google Chrome remotely on a virtual machine using SSH. I do not want X forwarding - I want to utilize the GPU available on the VM. When I try running google-chrome I get the following error:
[19615:19615:0219/152933.751028:ERROR:browser_main_loop.cc(1512)] Unable to open X display.
I've tried setting my DISPLAY env variable to various values:
export DISPLAY=localhost:0.0
export DISPLAY=127.0.0.1:0.0
export DISPLAY=:0.0
I've also tried replacing 0.0 in the above examples with different values.
I have ForwardX11 no in /etc/ssh/sshd_config
I tried setting up target like this:
systemctl isolate multi-user.target
When I run sudo lshw -C display I get the following output:
*-display
description: VGA compatible controller
product: Hyper-V virtual VGA
vendor: Microsoft Corporation
physical id: 8
bus info: pci@0000:00:08.0
version: 00
width: 32 bits
clock: 33MHz
capabilities: vga_controller bus_master rom
configuration: driver=hyperv_fb latency=0
resources: irq:11 memory:f8000000-fbffffff
*-display UNCLAIMED
description: VGA compatible controller
product: GM204GL [Tesla M60]
vendor: NVIDIA Corporation
physical id: 1
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress vga_controller bus_master cap_list
configuration: latency=0
resources: iomemory:f0-ef iomemory:f0-ef memory:41000000-41ffffff memory:fe0000000-fefffffff memory:ff0000000-ff1ffffff
I've tried to update my GPU drivers with:
wget https://www.nvidia.com/content/DriverDownload-March2009/confirmation.php?url=/tesla/375.66/nvidia-diag-driver-local-repo-rhel7-375.66-1.x86_64.rpm
yum -y install nvidia-diag-driver-local-repo-rhel7-375.66-1.x86_64.rpm
But after that I still see UNCLAIMED next to my NVIDIA GPU.
Any ideas?
You can try Xvfb - it does not require additional hardware.
Install Xvfb if you haven't already, then follow these steps.
sudo apt-get install -y xvfb
Dependencies to make "headless" chrome/selenium work:
sudo apt-get -y install xorg xvfb gtk2-engines-pixbuf
sudo apt-get -y install dbus-x11 xfonts-base xfonts-100dpi xfonts-75dpi xfonts-cyrillic xfonts-scalable
Optional but nifty: For capturing screenshots of Xvfb display:
sudo apt-get -y install imagemagick x11-apps
Make sure that Xvfb starts every time the box/vm is booted:
Xvfb -ac :99 -screen 0 1280x1024x16 &
export DISPLAY=:99
Run Google Chrome
google-chrome
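Alternatively, the xvfb-run wrapper (installed with the xvfb package) starts a throwaway virtual display for a single command and tears it down when the command exits; a minimal sketch:
# -a picks a free display number automatically
xvfb-run -a --server-args="-screen 0 1280x1024x16" google-chrome --no-sandbox https://www.google.com/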
Okay, I found my problem after two hours of going crazy. My box was configured correctly. What you can NOT do is SSH from one box, to another box, to this box and expect X11 forwarding to play nicely. Without tearing apart the entire network, I found that if I shelled directly from the MAIN box to this box (no double or triple SSH'ing), Chrome comes right up as a regular user using the CLI. So it was the multiple shells from multiple boxes that left the display set to NOTHING; setting the display manually only complicates the problem. Once I shelled directly over to this box from the main outside box, my display was set to 10:0, the first instance in my configuration. Don't make this mistake - you will waste valuable time.
FWIW, I ran into this when using SSH to log into a Selenium chrome node in a Docker compose stack. Chrome would launch if I invoked it as root with sudo -u seluser google-chrome, but not if I logged in as seluser. The trick turned out to be that root had DISPLAY set to :99.0, and seluser didn't have it set at all. If I set it explicitly (either from a seluser shell or from the docker-compose exec command line), it worked.
$ docker-compose exec -u seluser \
    selenium-chrome \
    /bin/bash    # "selenium-chrome" is whatever your service is called
seluser@c02cda62b751:/$ export DISPLAY=:99.0
seluser@c02cda62b751:/$ google-chrome http://app.test:3000/home
or
$ docker-compose exec -u seluser -e DISPLAY=:99.0 \
    selenium-chrome \
    google-chrome http://app.test:3000/home
That :99.0 is undocumented, though, so if this isn't working, you might try checking root's DISPLAY value with:
docker-compose exec -u root selenium-chrome bash -c 'echo "${DISPLAY}"'
I faced the same issue with WSL and Ubuntu. I uninstalled/reset Ubuntu, and after that executed the command below:
wsl --set-default-version 2
Then I installed Ubuntu again, and I didn't get the --no-sandbox issue or any other issue.
Hope this helps someone.
I'm trying to get unison working after upgrading to Mac OS X Catalina. Unfortunately, MacPorts installs a more recent version of OCaml (4.08.1), which means that the unison 2.51.2 release won't compile.
Well, that's no problem, I just update to git master on unison and recompile. Unfortunately, this fails at sync time because the version of OCaml used to compile on the Mac (4.08.1) is different from the one used to compile on the other machine (4.07.1). Sigh. Okay, use opam magic to install 4.07.1 on my machine. Everything should be fine, right? No!
Here's the error:
Connected [//zzzmyhost//home/clements/unison-home -> //zzzmyotherhost//Users/clements/clements]
Looking for changes
Uncaught exception Failure("input_value: ill-formed message")
Raised at file "/private/tmp/unison/src/lwt/lwt.ml", line 126, characters 16-23
Called from file "/private/tmp/unison/src/lwt/generic/lwt_unix_impl.ml", line 102, characters 8-23
Called from file "/private/tmp/unison/src/update.ml" (inlined), line 2105, characters 2-69
Called from file "/private/tmp/unison/src/uitext.ml", line 978, characters 16-56
Called from file "/private/tmp/unison/src/uitext.ml", line 1066, characters 6-90
Called from file "/private/tmp/unison/src/uitext.ml", line 1088, characters 19-66
Called from file "/private/tmp/unison/src/uitext.ml", line 1144, characters 21-43
What's going on?
Sigh... the problem here (very non-obvious) is actually a corrupted/wrong-format synchronization file, created by the failed sync in the earlier test.
The solution is just to go into ~/Library/Application Support/Unison (on a UNIX machine this path would presumably be ~/.unison) and delete the archive file that's causing the problem (probably the most recent one). In a pinch, just delete all of the archive files and start over.
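A minimal cleanup sketch, assuming the default archive locations (unison archive files are named ar followed by a hash):
cd ~/.unison 2>/dev/null || cd ~/Library/Application\ Support/Unison
ls -lt ar*                     # newest first - the most recent is the likely culprit
rm "$(ls -t ar* | head -1)"    # or: rm ar*  to start over completely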
I've got the same problem between Windows and Ubuntu 20.04 after upgrading from Ubuntu 18.04. I tried the binary from Ubuntu 18.04 in 20.04, which still fails, so the incompatibility is likely inside one of the dependencies.
As a workaround I created a Docker image based on Ubuntu 18.04:
FROM ubuntu:18.04
RUN apt-get update && apt-get dist-upgrade -y
RUN apt-get install unison -y
RUN useradd martin --home /home/martin
WORKDIR /home/martin
USER martin
Building it with docker build -t unison:18.04 .
And then I added a wrapper at ~/bin/unison-2.48.4-docker:
#!/bin/bash
docker run --rm -i \
-v /home/martin/dirtosync:/home/martin/dirtosync \
-v /home/martin/.unison:/home/martin/.unison \
--hostname $(hostname) \
  unison:18.04 unison "$@"
Setting the --hostname is important, since the hostname is part of the archive file.
Inside the profile on my Windows machine I configured:
servercmd = ~/bin/unison-2.48.4-docker
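To check that the wrapper really runs the 18.04 binary, a quick sanity check:
~/bin/unison-2.48.4-docker -version    # should report unison version 2.48.4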
In my setup, with two Windows clients and one Ubuntu 18.04 server connected by ssh, the problem started with a second server running Ubuntu 20.04. Neither the old server nor the Windows clients could sync with the new machine.
My solution: copying the binary from Ubuntu 18.04 to a new directory on the Ubuntu 20.04 machine. This new file is referenced in the "authorized_keys" file of ssh on the new machine.
So far, everything works great with unison 2.48.4.
I am working on creating an automated unit-testing system which will utilise Docker to test individual student assignments, written in Python, against a single unit-test file.
I have created a website where students can upload their assignments, but I'm a little bit unsure as to how to get the automation with Docker working.
The workflow looks something like this:
A student uploads an assignment for marking
This is copied to a linux host which contains docker
The file sits here while it waits to be tested
So, say I had twenty students uploading their .py files, named with their unique student numbers, could I:
Create a Docker container which runs Ubuntu and Python
Copy the student file and unit test into this container
Run the unit test
Output the results as a text file
Copy this text file back to my webserver to display the results
Could somebody point me in the right direction to get started with this automation? I'm really just after some help of the Docker side of things, not on copying the files from my webserver to the Docker host.
Thanks.
Yes, it is possible to use Docker for that.
The Dockerfile would look like this:
FROM ubuntu
MAINTAINER xxx <user@example.org>
# update ubuntu repository
RUN DEBIAN_FRONTEND=noninteractive apt-get -y update
# install ubuntu packages
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install python python-pip
# install python requirements
RUN pip install ...
# define a mount point
VOLUME /student.py
# define command for this image
CMD ["python","/student.py"]
Now, build this image with docker build -t student_test .
To start the script and grab the output you can use:
docker run --volume /path/to/s12345.py:/student.py student_test > student_results_12345.txt
The --volume parameter is needed, to mount a student script to the defined mount point. Also, you could start multiple containers at once.
All paths are relative to the current working directory.
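To grade a whole batch, a minimal sketch that loops over the uploaded files (the /uploads and results paths are hypothetical):
# run the student_test image once per assignment, capturing stdout per student
for f in /uploads/*.py; do
    id=$(basename "$f" .py)    # e.g. s12345
    docker run --rm --volume "$f":/student.py student_test > "results/${id}.txt"
done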
Check out the following project:
https://github.com/CenturyLinkLabs/buildpack-runner
It uses Heroku buildpacks to create a Docker image. Crazy but a neat idea, if you get it working.
Can somebody tell me how to use the Chrome driver in Selenium on the Linux platform?
I have my chromedriver at /home/username/ChromeDriver/chromedriver.
My code is:
System.setProperty("webdriver.chrome.driver", "/home/username/ChromeDriver/chromedriver");
driver = new ChromeDriver();
driver.get("facebook.com");
The error I am getting is:
org.openqa.selenium.WebDriverException: Unable to either launch or
connect to Chrome. Please check that ChromeDriver is up-to-date.
Using Chrome binary at: /opt/google/chrome/google-chrome
(WARNING: The server did not provide any stacktrace information)
From [the official documentation](https://github.com/SeleniumHQ/selenium/wiki/ChromeDriver):
Requirements
The ChromeDriver controls the browser using Chrome's automation proxy
framework.
The server expects you to have Chrome installed in the default
location for each system:
OS             Expected Location of Chrome
-------------------------------------------
Linux          /usr/bin/google-chrome
Mac            /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome
Windows XP     %HOMEPATH%\Local Settings\Application Data\Google\Chrome\Application\chrome.exe
Windows Vista  C:\Users\%USERNAME%\AppData\Local\Google\Chrome\Application\chrome.exe
For Linux systems, the ChromeDriver expects /usr/bin/google-chrome to be a symlink to the actual Chrome binary. See also the section on overriding the Chrome binary location.
Getting Started
To get set up, first download the appropriate prebuilt server. Make sure the server can be located on your PATH or specify its location via the webdriver.chrome.driver system property. Finally, all you need to do is create a new ChromeDriver instance:
WebDriver driver = new ChromeDriver();
driver.get("http://www.google.com");
Therefore, download the version of chromedriver you need, unzip it somewhere on your PATH (or specify the path to it via a system property), then run the driver.
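On Linux you can quickly verify that Chrome is where ChromeDriver expects it; a hedged check (the /opt path is just an example):
ls -l /usr/bin/google-chrome    # usually a symlink to the real binary
# if it's missing, point it at your actual Chrome binary, e.g.:
# sudo ln -s /opt/google/chrome/google-chrome /usr/bin/google-chrome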
We installed it successfully with:
sudo apt-get install unzip
wget -N http://chromedriver.storage.googleapis.com/2.10/chromedriver_linux64.zip -P ~/Downloads
unzip ~/Downloads/chromedriver_linux64.zip -d ~/Downloads
chmod +x ~/Downloads/chromedriver
sudo mv -f ~/Downloads/chromedriver /usr/local/share/chromedriver
Then symlink it into /usr/local/bin and /usr/bin:
sudo ln -s /usr/local/share/chromedriver /usr/local/bin/chromedriver
sudo ln -s /usr/local/share/chromedriver /usr/bin/chromedriver
Now run the script, and add the following to the environment file:
Capybara.register_driver :chrome do |app|
  client = Selenium::WebDriver::Remote::Http::Default.new
  Capybara::Selenium::Driver.new(app, :browser => :chrome, :http_client => client)
end
Capybara.javascript_driver = :chrome
Note: change the chromedriver version according to your operating system type (32-bit or 64-bit).
Here is a complete script for Ubuntu 18.04 to install Google Chrome and the chromedriver. It should automatically pick up the chromedriver version that matches the browser.
#!/usr/bin/env bash
# install the latest version of Chrome and the Chrome Driver
apt-get update && apt-get install -y libnss3-dev
version=$(curl http://chromedriver.storage.googleapis.com/LATEST_RELEASE)
wget -N http://chromedriver.storage.googleapis.com/${version}/chromedriver_linux64.zip
unzip chromedriver_linux64.zip -d /usr/local/bin
chmod +x /usr/local/bin/chromedriver
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
For me it worked with these commands:
Unzip the file -> unzip -q chromedriver_linux64.zip
Force-move it to /usr/bin -> sudo mv -f chromedriver /usr/bin
The Selenium code was something like this:
System.setProperty("webdriver.chrome.driver","/usr/bin/chromedriver");
WebDriver driver = new ChromeDriver();
driver.get("https://mvnrepository.com");
driver.close();
You can see a small example below.
For Linux, I download the chromedriver and keep it in a folder on the system PATH (or put it in an existing PATH folder). From code, I set the system property and start a driver service pointing at the chromedriver:
System.setProperty("webdriver.chrome.driver", "/usr/local/bin/chromedriver");
ChromeDriverService service = new ChromeDriverService.Builder()
.usingDriverExecutable(new File("/usr/local/bin/chromedriver"))
.usingAnyFreePort()
.build();
try {
service.start();
} catch (IOException e) {
e.printStackTrace();
}
return new RemoteWebDriver(service.getUrl(), DesiredCapabilities.chrome());