Python test discovery fails with Remote SSH + conda env

I'm using VS Code on Windows with Remote SSH to develop Python code hosted on Linux.
My Python environment is a conda env based on Python 3.7.
During the test discovery stage, the run_adapter.py script is launched and fails with the following log:
python /home/scharlois/.vscode-server/extensions/ms-python.python-2020.2.64397/pythonFiles/testing_tools/run_adapter.py discover pytest -- --rootdir /path/to/my/project -s --cache-clear tests
Test Discovery failed:
Error: ERROR 1: PROJ: proj_create_from_database: Open of /home/scharlois/.conda/envs/conda37/share/proj failed
I get no error when I execute the same command in the conda env on the remote host.
Which interpreter is used to run the run_adapter.py script? Is it the conda Python one?

It is the conda Python interpreter (displayed by modifying the run_adapter.py script).
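A quick way to confirm this without editing the extension is to ask the interpreter itself from a shell on the remote host (a minimal sketch; with the conda env active it should print a path under ~/.conda/envs/conda37):
python -c 'import sys; print(sys.executable)'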
I found a workaround: inserting the following lines in the script before the main execution:
import os
os.environ["PROJ_LIB"]=""
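The same workaround can also be applied from the shell, without patching the extension, by clearing the variable before rerunning the discovery command from above (a sketch):
export PROJ_LIB=""   # stop PROJ from probing the broken conda share/proj path
python /home/scharlois/.vscode-server/extensions/ms-python.python-2020.2.64397/pythonFiles/testing_tools/run_adapter.py discover pytest -- --rootdir /path/to/my/project -s --cache-clear tests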

Related

Unable to install nextflow correctly

I've attempted to install Nextflow with the command "curl get.nextflow.io | bash" in bash on my Windows system, but I consistently see this error: Unable to initialize nextflow environment
In parallel, on a remote server, Nextflow is installed with the same command, and I can verify that with the ls command. However, when I prepare a sample script (hello.nf), running it with "nextflow run hello.nf" shows that Nextflow is not recognised.
In either case, typing "nextflow info" doesn't recognise nextflow. Is there any way I can install Nextflow, or have I missed a step? I have also tried with wget, but no success yet.
Thanks in advance!
If you have Windows 10 Version 1903 with Build 18362 or higher, you can use the Windows Subsystem for Linux 2 (WSL 2) rather than having to configure a Linux VM. There are a few hoops to jump through, but this guide (Oct 2021) should be sufficient: Setting up a Nextflow environment on Windows 10
On your remote machine, all you should need to do is move your nextflow executable somewhere on your $PATH. Alternatively, you can just call it with ./nextflow (assuming it is in your current directory).
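For example (a minimal sketch, assuming the launcher was downloaded to your current directory; ~/.local/bin is just a hypothetical target, any directory on $PATH works):
chmod +x nextflow                      # make sure the launcher is executable
mkdir -p ~/.local/bin
mv nextflow ~/.local/bin/
export PATH="$HOME/.local/bin:$PATH"   # add to ~/.bashrc if this directory is not already on PATH
nextflow info                          # should now resolve from any directory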

Jenkins: ChromeDriver update PATH from shell script and use new version

I am looking for a way to update the PATH in Jenkins for running Selenium tests with PyTest.
I need to run the latest version of chromedriver, but our base image runs Debian, where the newest available package is version 73, and I need to be running at least 83.
There is already a version of chromedriver installed on the image at /usr/bin, and I need to be able to point to a different version.
The Jenkins ChromeDriver plugin appears to just use the latest version available for Debian, which doesn't help me at all.
Until I have time to address the systemic issue, I'd like to just install chromedriver and update PATH, because Selenium requires chromedriver to be on the PATH.
For ease of use, https://pypi.org/project/chromedriver-binary/ seemed like a good solution: it installs just fine, and its chromedriver-path shell script echoes the install location, so I could just update PATH as the documentation shows: PATH=$PATH:`chromedriver-path`
This doesn't seem to work in Jenkins - PATH is not updated:
stages {
  stage('build') {
    steps {
      withCredentials([...]) {
        sh """
          alias python=python3.8
          python -m venv --system-site-packages venv  # only for jenkins
          python -u setup.py
          . venv/bin/activate
          which chromedriver    # /usr/bin/chromedriver
          chromedriver-path     # path/to/python/lib/python3.8/site-packages/chromedriver_binary
          export PATH=$PATH:`chromedriver-path`
          which chromedriver    # /usr/bin/chromedriver
        """
        sh "python -m pytest"
      }
    }
  }
}
I have looked at the withEnv() option and the environment{} block, but I'm not sure how to access that binary and update PATH once chromedriver-binary has been installed, because it appears that environment{} would not have access to shell scripts installed in individual steps.
Any tips would be greatly appreciated.
The issue may actually be in the Jenkinsfile declaration.
Try using sh with single quotes ('''): inside a double-quoted Groovy string, $PATH is interpolated by Groovy when the pipeline is evaluated, not by the shell at run time. Also, binaries are searched in the directories listed in PATH from left to right, so to override the system chromedriver you must put your directory at the beginning, not at the end.
If I alter your code snippet:
stages {
  stage('build') {
    steps {
      withCredentials([...]) {
        sh '''
          alias python=python3.8
          python -m venv --system-site-packages venv  # only for jenkins
          python -u setup.py
          . venv/bin/activate
          which chromedriver    # /usr/bin/chromedriver
          chromedriver-path     # path/to/python/lib/python3.8/site-packages/chromedriver_binary
          export PATH=$(chromedriver-path):$PATH
          echo $PATH            # just to check: your directory should now be at the beginning
          which chromedriver    # this should now find the proper chromedriver
        '''
        sh "python -m pytest"
      }
    }
  }
}
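After the override, a quick sanity check (hedged; the exact version string depends on which chromedriver-binary release you installed):
which chromedriver        # should now point into .../site-packages/chromedriver_binary
chromedriver --version    # should report 83+, not the Debian 73 build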

WebDriverException: Message: invalid argument: can't kill an exited process in a Docker container returns odd errors

I'm running a small Python script that scrapes some data from a public website.
When I run the Dockerfile instructions line by line in an interactive terminal using the selenium/standalone-firefox:latest image, the script runs fine:
docker run -it -v /Users/me/Desktop/code/scraper/:/scraper selenium/standalone-firefox bash
As soon as I run it using my Dockerfile and docker-compose file I get this error:
app_1 | selenium.common.exceptions.WebDriverException: Message: invalid argument: can't kill an exited process
I am using the MOZ_HEADLESS=1 env var. It's being passed properly.
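One way to double-check that from outside (a sketch; the service name app is inferred from the app_1 prefix in the log):
docker-compose run --rm app env | grep MOZ_HEADLESS   # expect MOZ_HEADLESS=1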
I have tried running the script as someone other than root but then I get log errors.
Dockerfile
FROM selenium/standalone-firefox:latest
# https://github.com/SeleniumHQ/docker-selenium/issues/725
USER root
RUN apt-get update -y
RUN apt-get install -y firefox python-pip
WORKDIR /scraper
COPY . /scraper
RUN pip install -r /scraper/requirements.txt
ENV MOZ_HEADLESS=1
CMD ["python", "/scraper/browserscraper.py"]
If I run those instructions in the Dockerfile from an interactive terminal, I have no problems.
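For comparison, the working interactive session looks roughly like this (a sketch mirroring the Dockerfile steps above; -u root is added so the apt-get steps work, matching the USER root line):
docker run -it -u root -v /Users/me/Desktop/code/scraper/:/scraper selenium/standalone-firefox bash
# then, inside the container:
apt-get update -y && apt-get install -y firefox python-pip
pip install -r /scraper/requirements.txt
MOZ_HEADLESS=1 python /scraper/browserscraper.py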
It either has to do with root being the user running the script via the Dockerfile, or with the container lacking a screen for output, because I'm not actually attached to a terminal the way I am when running it from the command line with -it.
Any ideas?

Permissions denied

I'm trying to create a package for iOS with Kivy: https://kivy.org/docs/guide/packaging-ios.html
I try to run ./toolchain.py build kivy in a terminal on macOS.
Error: sudo: unable to execute ./toolchain.py: Permission denied.
My Python is set up in Anaconda and is running correctly.
The first line of ./toolchain.py is: #!/anaconda/envs/python2/bin python2.7
Does anyone know how to change the permissions / how to get this to work?
When I point the first line of ./toolchain.py at the default Python (/usr/bin/env), it does execute, but with the default Python I'm not able to install pip.
Ensure ./toolchain.py has the execute bit set (chmod a+x toolchain.py).
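That is (a minimal sketch):
chmod a+x toolchain.py          # add the execute bit for all users
ls -l toolchain.py              # the permissions column should now start with -rwx
sudo ./toolchain.py build kivy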

Running headless firefox Xvfb with Jenkins to run selenium tests

I get an "Error: no display specified" error when running Play Framework tests in Jenkins on a FreeBSD server.
So every time I hit a timeout:
org.openqa.selenium.firefox.NotConnectedException: Unable to connect to host 127.0.0.1 on port 7055 after 45000 ms. Firefox
Jenkins has:
1) Xvfb plugin installed
2) Play Framework installed
Tests are written using the Selenide library and the Selenide module for Play Framework.
Xvfb is configured and enabled in the job configuration.
Job console output is:
Checking out Revision 3f485bd2e3dbcfa058fc19f89ab18020e36707d8 (origin/trunk)
...
Xvfb starting$ /usr/local/bin//Xvfb :1 -screen 0 -fbdir /usr/local/jenkins/xvfb-9-786185694297443042.fbdir
...
Command detected: clean
Command detected: deps --sync
Command detected: precompile
Command detected: auto-test
[YalsTests] $ /srv/java/play/play clean
...
~ using java version "1.8.0_72"
[YalsTests] $ /srv/java/play/play auto-test
~ 14 tests to run:
~
~ selenium/front/CorrectInput... org.openqa.selenium.firefox.NotConnectedException: Unable to connect to host 127.0.0.1 on port 7055 after 45000 ms. Firefox console output:
Error: no display specified
at org.openqa.selenium.firefox.internal.NewProfileExtensionConnection.start(NewProfileExtensionConnection.java:113)
at org.openqa.selenium.firefox.FirefoxDriver.startClient(FirefoxDriver.java:271)
Job configuration:
[X] Start Xvfb before the build, and shut it down after.
Xvfb specific display name 1
Xvfb display name offset 0
Invoke Play Framework
Command set Play 1.x
Goals
Clean project [clean]
Custom parameter
Custom command deps --sync
Precompile all Java sources and templates [precompile]
Automatically run all application tests [auto-test]
The Selenium task needs to know which DISPLAY it should connect to.
You can set it e.g. as an environment variable (don't forget to export it, if you do that in .profile)
export DISPLAY=:10
This is for bash; other shells might need a two-step process:
DISPLAY=:10
export DISPLAY
You can also specify the variable at the command line before the command:
DISPLAY=:10 java -jar mySelenium.jar
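In this particular job, the Xvfb plugin started the server on display :1 (see the "Xvfb starting" line in the console output above), so the matching value would presumably be:
export DISPLAY=:1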
You could avoid all these issues by using the Selenoid project, which starts headless browsers in parallel in Docker containers. The container images are built with compatible versions of the webdriver and the browser, and they also include fonts, Flash player, and so on. Just choose one of the existing images and run your tests. There is no need to install Java to run Selenium tests.
I tend to supply this if I'm running my tests via Jenkins:
xvfb-run --server-args="-screen 0, 1920x1080x16" mvn clean install...
One thing that has tripped me up in the past is that while xvfb-run will create a virtual display, it can really screw up your screenshots and web interactions if it isn't sized correctly, so it's usually advisable to supply a resolution size which will suitably display your browser.