activeFocus very unreliable in TestCase - qml

How can I make QML TestCase behave more predictably when doing keyboard navigation tests, where I check various steps using compare() and the activeFocus of an element?
Right now, the tests pass and fail randomly, with no repeatability. So badly, in fact, that even THIS fails randomly:
import QtQuick 2.5
import QtTest 1.1

TestWindow {
    TestCase {
        when: windowShown
        name: 'Example test'

        property string myView: 'mainView.homeWrapper.mainView.myView'

        // Some other tests, but should not matter for the below

        function test_5_keyboard () {
            var testView = getElement(myView)
            var uiUpButton = findChild(testView, 'catalogUpButton')
            uiUpButton.forceActiveFocus()
            wait(500)
            compare(uiUpButton.activeFocus, true)
        }
    }
}
This randomly (but quite often, in about 50% of runs) results in:
test-ui_1 | FAIL! : qmltestrunner::Example test::test_5_keyboard() Compared values are not the same
test-ui_1 | Actual (): false
test-ui_1 | Expected (): true
test-ui_1 | Loc: [/app/test/tst_MyTest.qml(100)]
The same happens in "real" tests too, where I don't use forceActiveFocus() but simply check the activeFocus of an element after moving focus with the keyboard, for example via keyClick(Qt.Key_Down). They often work, but again fail randomly, too often to be usable.
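For illustration, such a step looks roughly like this (the object names are placeholders; note that QtTest also offers tryCompare(), which polls the property until it matches or a timeout expires, rather than sampling it once):

function test_keyboard_navigation() {
    var testView = getElement(myView)
    var uiUpButton = findChild(testView, 'catalogUpButton')
    keyClick(Qt.Key_Down)
    // compare() samples activeFocus at one arbitrary moment;
    // tryCompare(uiUpButton, 'activeFocus', true, 5000) would instead
    // retry until the property matches or 5000 ms pass
    compare(uiUpButton.activeFocus, true)
}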
The tests are run inside a container running Ubuntu 16.04, using these libraries:
RUN apt-get install -y qtcreator

# Install modules required by QML
RUN apt-get install -y \
    qml-module-qtquick-localstorage \
    qml-module-qt-websockets \
    libqt5qml-graphicaleffects \
    qt5-default \
    qtdeclarative5-test-plugin

# Install Xvfb virtual display
RUN apt-get install -y xvfb
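For reference, the tests are then launched under the virtual display along these lines (the test path here is illustrative, not the exact command used):

# Illustrative invocation: run qmltestrunner on a throwaway Xvfb display
# (-a makes xvfb-run pick a free display number automatically)
xvfb-run -a qmltestrunner -input /app/test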

Related

selenium-webdriver script isn't finding local file

I am unable to get a Node.js script that launches Chrome to load a local file.
It works on 18.04 but not 22.04.
Has there been some significant change that would affect local file loading syntax, or is there something wrong in my code?
const { Builder } = require('selenium-webdriver')

async function start() {
    const chrome = require('selenium-webdriver/chrome')
    const options = new chrome.Options()
    options.addArguments('--disable-dev-shm-usage')
    options.addArguments('--no-sandbox')
    const driver = new Builder()
        .forBrowser('chrome')
        .setChromeOptions(options)
        .build()
    // Load the local file, give it time to settle, then dump the page text
    await driver.get('file://' + __dirname + '/myfile.html')
    await driver.sleep(10000)
    const text = await driver.executeScript('return document.documentElement.innerText')
    console.log(text)
    await driver.quit()
}

start()
The result is:
Your file couldn’t be accessed
It may have been moved, edited or deleted.
ERR_FILE_NOT_FOUND
I can confirm that myfile.html is definitely present.
Using console.log to show the file argument value shows it is the same for both older and newer Ubuntu.
Changing the driver.get argument to a website, e.g. https://www.google.com/ correctly shows the webpage content in the output.
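For what it's worth, the presence check was done along these lines (a sketch of the verification, not the original code):

const fs = require('fs')
const path = require('path')

// Does the file Chrome is being asked to load actually exist on disk?
const target = path.join(__dirname, 'myfile.html')
console.log(target, fs.existsSync(target))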
The local file code fails as above using:
Ubuntu 22.04.1 LTS
Node v16.15.0
chromium-browser Chromium 108.0.5359.71 snap
It works fine on:
Ubuntu 18.04.6 LTS
Node v16.15.0
Chromium 107.0.5304.87 Built on Ubuntu , running on Ubuntu 18.04
This seems to be because the default Ubuntu 22.04 Chromium package is a snap package that limits local file access to /home/.
Switching to the .deb distribution from Google solves the issue. The chromedriver version also needs to match.
# Chrome browser - .deb package, not the standard OS Snap which limits access to only /home/
# See: https://askubuntu.com/questions/1184357/why-cant-chromium-suddenly-access-any-partition-except-for-home
# See: https://www.ubuntuupdates.org/ppa/google_chrome?dist=stable
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
apt-get update
apt-get install google-chrome-stable
# Chromedriver, matching version number
# See: https://skolo.online/documents/webscrapping/#pre-requisites
# See step 3 at: https://tecadmin.net/setup-selenium-chromedriver-on-ubuntu/
# See: https://sites.google.com/chromium.org/driver/
#!# NB This will become out of sync if apt-get updates google-chrome-stable
google-chrome --version
wget https://chromedriver.storage.googleapis.com/108.0.5359.71/chromedriver_linux64.zip
unzip chromedriver_linux64.zip
sudo mv chromedriver /usr/bin/chromedriver
sudo chown root:root /usr/bin/chromedriver
sudo chmod +x /usr/bin/chromedriver
rm chromedriver_linux64.zip
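After installing, it may be worth confirming that the two versions actually line up:

# Both should report the same major version (108 in this case)
google-chrome --version
chromedriver --version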

Problems getting Singularity Compose to work

I wrote a small test project for Singularity Compose, consisting of a small server application, with the following YAML file:
version: "1.0"
instances:
server:
build:
context: ./server
recipe: server.recipe
ports:
- 9999:9999
When I call singularity-compose build, it successfully builds server.sif. Calling singularity-compose up also seemingly works without error, and calling singularity-compose ps results in something that looks just fine:
+ singularity-compose ps
INSTANCES NAME PID IMAGE
1 server 4176911 server.sif
However, the server application does not work: running my test client just reports that there is no answer from the server.
But if I run server.sif directly without compose, everything works just fine.
Also, I triple-checked: my test application listens on port 9999 and thus should be reachable from the outside.
What did I do wrong?
Edit:
I also checked whether there actually is any process listening on port 9999 by calling sudo lsof -i -P -n | grep LISTEN; this is not the case. Only when I manually start server.sif without compose does it show the process listening.
Edit:
I went into the Singularity Compose shell and tried to start the Server application directly in there, just as a test, and it resulted in Permission denied. Not sure if that means anything.
Edit:
I now gave the application execute permissions within the shell and ran it there; this works. I am now trying to add execute permissions in the recipe. If that works, it would be kind of strange, as the executable was built right there and thus should already have execute permissions.
Edit:
I added chmod +x in my recipe both after building Server and before executing it. Doesn't work either.
Also checked whether any bridges exist using brctl show, this is not the case.
Edit: My recipe, adjusted by the input of tsnowlan in his answer below:
Bootstrap: docker
From: ubuntu:20.04

%files
    connection.cpp
    connection.h
    main.cpp
    server.cpp
    server.h
    server.pro

%post
    # get some basics
    apt update
    apt-get install -y wget
    apt-get install -y software-properties-common
    # get C++ compiler
    apt-get install -y g++
    apt-get install -y build-essential
    apt-get install -y build-essential cmake
    # get Qt
    apt-get install -y qt5-default
    # compile
    qmake
    make
    ls

%runscript
    /Server

%startscript
    /Server
Again, note that the application works just fine both when compiled and started normally and when started within a Singularity image (but without Singularity Compose).
The ls at the end of the %post block is used to verify that the Server application was built successfully.
Please share the server.recipe, as it is difficult to identify what should be/is happening without it.
Without having that, my guess is that you have a %runscript in your definition file, but no %startscript. When the image is executed directly or via singularity run image.sif, the contents of %runscript determine what happens. To emulate the docker-compose style, the singularity images are started as persistent instances. In this case, the %startscript block determines what runs. If it is empty, it will just start up and sit there doing nothing. This would explain why when run by hand it works but not when using compose.
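As a sketch of the suggested fix (the /Server path is taken from the recipe above), the definition file should define both blocks:

%runscript
    # used when the image is executed directly or via: singularity run server.sif
    /Server

%startscript
    # used when the image is started as a persistent instance,
    # e.g. by singularity-compose up
    /Server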

Jenkins: ChromeDriver update PATH from shell script and use new version

I am looking for a way to update the PATH in Jenkins for running Selenium tests with PyTest.
I need to run the latest version of chromedriver, but due to an infrastructure deficiency I can't: our base image is running Debian, where the latest available version is 73, and I need to be running at least 83.
There is already a version of chromedriver installed on the image at /usr/bin, and I need to be able to point to a different version.
The Jenkins ChromeDriver plugin appears to just use the latest version available for Debian, which doesn't help me at all.
Until I have time to address the systemic issue, I'd like to just install chromedriver and update PATH, because Selenium requires chromedriver on the PATH.
It seemed, for ease of use, that https://pypi.org/project/chromedriver-binary/ was a good solution - it installs just fine, and the shell script chromedriver-path echoes the location, so I could just update PATH as the documentation shows: PATH=$PATH:`chromedriver-path`
This doesn't seem to work in Jenkins - PATH is not updated:
stages {
    stage('build') {
        steps {
            withCredentials([...]) {
                sh """
                    alias python=python3.8
                    python -m venv --system-site-packages venv # only for jenkins
                    python -u setup.py
                    . venv/bin/activate
                    which chromedriver # /usr/bin/chromedriver
                    chromedriver-path # path/to/python/lib/python3.8/site-packages/chromedriver_binary
                    export PATH=$PATH:`chromedriver-path`
                    which chromedriver # /usr/bin/chromedriver
                """
                sh "python -m pytest"
            }
        }
    }
}
I have looked at the withEnv() option and the environment{} step, but I'm not sure how to access that binary and update PATH once chromedriver-binary has been installed, because it appears that environment{} would not have access to shell scripts that are installed in the individual steps.
Any tips would be greatly appreciated
The issue may actually be in the Jenkinsfile declaration.
Try using sh with single quotes ('''). Also, binaries are searched in the directories defined in PATH from left to right, so to override the system PATH you must put your directory at the beginning, not at the end.
If I alter your code snippet:
stages {
    stage('build') {
        steps {
            withCredentials([...]) {
                sh '''
                    alias python=python3.8
                    python -m venv --system-site-packages venv # only for jenkins
                    python -u setup.py
                    . venv/bin/activate
                    which chromedriver # /usr/bin/chromedriver
                    chromedriver-path # path/to/python/lib/python3.8/site-packages/chromedriver_binary
                    export PATH=$(chromedriver-path):$PATH
                    echo $PATH # just to check the output; your path should be at the beginning
                    which chromedriver # this should now find the proper chromedriver
                '''
                sh "python -m pytest"
            }
        }
    }
}
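Regarding the withEnv() option mentioned in the question: Jenkins can also prepend to PATH declaratively with its PATH+IDENTIFIER syntax. A sketch (assuming the directory printed by chromedriver-path is captured into a variable first):

script {
    // Capture the directory that chromedriver-path prints; the venv must be
    // activated in the same sh step, since each sh call is a fresh shell
    def cdPath = sh(script: '. venv/bin/activate && chromedriver-path', returnStdout: true).trim()
    // PATH+ANYTHING=value prepends value to PATH for the enclosed steps
    withEnv(["PATH+CHROMEDRIVER=${cdPath}"]) {
        sh 'which chromedriver && python -m pytest'
    }
}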

Unable to open X display when trying to run google-chrome on CentOS (RHEL 7.5)

I need to run Google Chrome remotely on a virtual machine using SSH. I do not want X forwarding - I want to utilize the GPU available on the VM. When I try running google-chrome I get the following error:
[19615:19615:0219/152933.751028:ERROR:browser_main_loop.cc(1512)] Unable to open X display.
I've tried setting my DISPLAY env variable to various values:
export DISPLAY=localhost:0.0
export DISPLAY=127.0.0.1:0.0
export DISPLAY=:0.0
I've also tried replacing 0.0 in the above examples with different values.
I have ForwardX11 no in /etc/ssh/sshd_config
I tried setting the target like this:
systemctl isolate multi-user.target
When I try to run sudo lshw -C display I get the following output:
*-display
     description: VGA compatible controller
     product: Hyper-V virtual VGA
     vendor: Microsoft Corporation
     physical id: 8
     bus info: pci@0000:00:08.0
     version: 00
     width: 32 bits
     clock: 33MHz
     capabilities: vga_controller bus_master rom
     configuration: driver=hyperv_fb latency=0
     resources: irq:11 memory:f8000000-fbffffff
*-display UNCLAIMED
     description: VGA compatible controller
     product: GM204GL [Tesla M60]
     vendor: NVIDIA Corporation
     physical id: 1
     version: a1
     width: 64 bits
     clock: 33MHz
     capabilities: pm msi pciexpress vga_controller bus_master cap_list
     configuration: latency=0
     resources: iomemory:f0-ef iomemory:f0-ef memory:41000000-41ffffff memory:fe0000000-fefffffff memory:ff0000000-ff1ffffff
I've tried to update my GPU drivers by:
wget https://www.nvidia.com/content/DriverDownload-March2009/confirmation.php?url=/tesla/375.66/nvidia-diag-driver-local-repo-rhel7-375.66-1.x86_64.rpm
yum -y install nvidia-diag-driver-local-repo-rhel7-375.66-1.x86_64.rpm
But after that I still see UNCLAIMED next to my NVIDIA GPU.
Any ideas?
You can try Xvfb. It does not require additional hardware.
Install Xvfb if you haven't installed it yet, then do the following steps.
sudo apt-get install -y xvfb
Dependencies to make "headless" chrome/selenium work:
sudo apt-get -y install xorg xvfb gtk2-engines-pixbuf
sudo apt-get -y install dbus-x11 xfonts-base xfonts-100dpi xfonts-75dpi xfonts-cyrillic xfonts-scalable
Optional but nifty: For capturing screenshots of Xvfb display:
sudo apt-get -y install imagemagick x11-apps
Start Xvfb and point DISPLAY at it (add these lines to a startup script to make sure Xvfb starts every time the box/VM is booted):
Xvfb -ac :99 -screen 0 1280x1024x16 &
export DISPLAY=:99
Run Google Chrome
google-chrome
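To verify something is actually being rendered on the virtual display, the imagemagick tools installed above can grab a screenshot of it (a sketch; :99 matches the Xvfb display started above):

# imagemagick's `import` captures an X11 screenshot of the root window
DISPLAY=:99 import -window root /tmp/xvfb-check.png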
Okay guys, I found my problem after 2 hours of going crazy. My box was configured correctly. What you can NOT do is ssh from one box, to another box, to this box, and expect X11 forwarding to play nicely. Without tearing apart the entire network, I found that if I shelled over from the MAIN box to this box (no double or triple ssh'ing), Chrome comes right up as a regular user using the CLI. So it was a matter of multiple shells from multiple boxes that made the display say it was set to NOTHING. Setting the display manually only complicates the problems. Once I shelled directly over to this box from the main outside box, my display was set to 10:0, which is the first instance in my configuration. Don't make this mistake; you will waste valuable time.
FWIW, I ran into this when using SSH to log into a Selenium chrome node in a Docker compose stack. Chrome would launch if I invoked it as root with sudo -u seluser google-chrome, but not if I logged in as seluser. The trick turned out to be that root had DISPLAY set to :99:0, and seluser didn't have it set at all. If I set it explicitly (either from a seluser shell or from the docker compose exec command line) it worked.
$ docker-compose exec -u seluser \
    selenium-chrome \
    /bin/bash   # "selenium-chrome" is whatever your service is called
seluser@c02cda62b751:/$ export DISPLAY=:99:0
seluser@c02cda62b751:/$ google-chrome http://app.test:3000/home
or
$ docker-compose exec -u seluser -e DISPLAY=:99:0 \
    selenium-chrome \
    google-chrome http://app.test:3000/home
That :99:0 value is undocumented, though, so if this isn't working, you might try checking root's DISPLAY value with:
docker-compose exec -u root selenium-chrome bash -c 'echo "${DISPLAY}"'
I faced the same issue with WSL and Ubuntu. I uninstalled/reset Ubuntu. After that, I executed the command below:
wsl --set-default-version 2
Then I installed Ubuntu again, and I didn't get the --no-sandbox issue or any other issue.
Hope this is useful for someone.

Deoplete doesn't work for neovim in ArchLinux

I'm trying to use neovim with deoplete on ArchLinux. Both require Python support.
I've installed the neovim package plus the python-neovim and python2-neovim extra plugins with pacman to get Python support.
This is my very simple neovim config:
call plug#begin('~/.vim/plugged')

if has('nvim')
  Plug 'Shougo/deoplete.nvim', { 'do': ':UpdateRemotePlugins' }
else
  Plug 'Shougo/deoplete.nvim'
  Plug 'roxma/nvim-yarp'
  Plug 'roxma/vim-hug-neovim-rpc'
endif

let g:deoplete#enable_at_startup = 1

" Initialize plugin system
call plug#end()
But deoplete autocompletion doesn't work for me.
I've run :checkhealth and this is the result:
health#deoplete#check
========================================================================
## deoplete.nvim
- OK: exists("v:t_list") was successful
- OK: has("timers") was successful
- OK: has("python3") was successful
- OK: Python3.5+ was successful
- INFO: If you're still having problems, try the following commands:
$ export NVIM_PYTHON_LOG_FILE=/tmp/log
$ export NVIM_PYTHON_LOG_LEVEL=DEBUG
$ nvim
$ cat /tmp/log_{PID}
and then create an issue on github
Could someone explain what I'm doing wrong?
Fairly sure you need neovim-python or python-pynvim, which is a community repo in the AUR.
After that, just run pip3 install --user --upgrade pynvim and you ought to be settled.
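To confirm the Python client is actually importable by the interpreter Neovim uses, a quick check (a sketch) is:

# Should print the module's path rather than raising an ImportError
python3 -c 'import pynvim; print(pynvim.__file__)'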
Edit: just noticed that the answer was suggested in the comments below, and that it is confirmed to be working. Nice!