UPDATE: The problem results from the on-premise runner config; it works on GitLab.com. I removed all misleading parts of the question.
I have an acceptance test suite for an intranet server with a domain name. I use Selenium Docker images for running Chrome; Codeception runs locally.
On GitLab.com this works like a charm, but my local GitLab Runner won't run it correctly.
MWE: https://gitlab.com/jpmschuler/mwe-gitlab-codeception-selenium-docker
The selenium config part of the test suite:
config:
    WebDriver:
        host: 'selenium__standalone-chrome'
        browser: 'chrome'
        port: 4444
What doesn't work in GitLab CI:
myjob:
    stage: test
    services:
        - selenium/standalone-chrome:84.0
    script:
        - vendor/bin/codecept run -vvv --env visualRegression --fail-fast --steps
I curled both Selenium and the test system and got the same response as locally, which rules out any host-resolution problems.
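As a sketch, such a reachability check could be scripted as an extra debug step in the job itself, assuming the default service alias selenium__standalone-chrome and Selenium's standard /wd/hub/status endpoint:

```yaml
# Hypothetical debug variant of the job; the curl line is only a sanity check.
myjob:
  stage: test
  services:
    - selenium/standalone-chrome:84.0
  script:
    # Should print a JSON status body if the service is reachable from the build container
    - curl -sf http://selenium__standalone-chrome:4444/wd/hub/status
    - vendor/bin/codecept run -vvv --env visualRegression --fail-fast --steps
```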
Nevertheless the codeception test throws for image version 3.141:
[Facebook\WebDriver\Exception\SessionNotCreatedException] session not created
from disconnected: Unable to receive message from renderer (Session info: chrome=83.0.4103.61)
and for image version 4.0.0:
[Facebook\WebDriver\Exception\SessionNotCreatedException] Unable to create session for
<CreateSessionRequest with Capabilities {browserName: chrome}>
I find the exceptions unsettling. Neither makes sense to me, as everything works locally with the same Selenium images. Does anybody have a clue where to look?
The MWE at https://gitlab.com/jpmschuler/mwe-gitlab-codeception-selenium-docker works, so I went on debugging and found that the culprit is my on-premise runner:
[[runners]]
    name = "its-a-me-the-broken-runner"
    url = "https://git.example.com/"
    token = "2c2f60a2xxxxxxxxxxxxxxxxx"
    executor = "docker"
    environment = ["LC_ALL=en_US.UTF-8", "DOCKER_DRIVER=overlay2"]
    [runners.cache]
        [runners.cache.s3]
        [runners.cache.gcs]
    [runners.docker]
        tls_verify = false
        image = "docker:stable"
        privileged = true
        disable_entrypoint_overwrite = false
        oom_kill_disable = false
        disable_cache = false
        volumes = ["/cache"]
        pull_policy = "if-not-present"
        shm_size = 2097152
What could be the problem here?
Related
I'd like to use Codeception acceptance tests to test my PHP application.
For this I have an acceptance.suite.yml configuration like this:
class_name: AcceptanceTester
modules:
    enabled:
        - WebDriver:
            url: "http://myserver"
            window_size: false # disabled in ChromeDriver
            port: 9515
            browser: chrome
            capabilities:
                chromeOptions:
                    args: ["--headless", "--disable-gpu"] # Run Chrome in headless mode
                    prefs:
                        download.default_directory: "/tmp"
        - Yii2:
            part: orm
            entryScript: index-test.php
            cleanup: false
When I start the test with codecept run, I get the following error:
[ConnectionException] Can't connect to WebDriver at http://127.0.0.1:9515/wd/hub. Make sure that ChromeDriver, GeckoDriver or Selenium Server is running.
This is strange, because url is set to http://myserver, which is not localhost.
Question: Why does codecept try to use localhost instead of http://myserver?
I also changed the port to make sure this config file is really used. That showed that the config file is indeed used and that the port: parameter really comes from acceptance.suite.yml. Only the url: parameter seems to have no effect.
Any idea?
Codeception connects to a WebDriver daemon, e.g. ChromeDriver, and then WebDriver connects to the URL.
In your case it fails to connect to WebDriver.
Do you have ChromeDriver running on your computer?
If it is running on a different machine, you can specify it using the host parameter.
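As a sketch, assuming a hypothetical machine name chromedriver-host where the ChromeDriver daemon runs, the WebDriver module section could point at it like this:

```yaml
modules:
    enabled:
        - WebDriver:
            url: "http://myserver"          # the site under test
            host: "chromedriver-host"       # hypothetical remote machine running ChromeDriver
            port: 9515
            browser: chrome
```

The url: option tells WebDriver which site to open; the host: and port: options tell Codeception where the WebDriver daemon itself listens, which is why a wrong daemon address surfaces as a connection error on 127.0.0.1.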
Here's the situation:
machineA has Protractor, Selenium, WebDriver, etc. installed.
machineB has all source code, including tests and the server running that source code, but no Protractor, Selenium, etc.
environment is Linux on both machines
How can I run the Protractor command on machineA so that it points to the spec on machineB?
i.e., how do I get a local Protractor command to point to a remote spec?
You need to provide seleniumAddress in your config file and make sure the directConnect option is commented out or set to false.
Start the WebDriver server on machineA with webdriver-manager start:
exports.config = {
    baseUrl: 'http://example.com/',
    seleniumAddress: 'http://<machineA IP address>:4444/wd/hub',
    // directConnect: true,
    ...
}
I am currently working on a Java Spring Boot project with a classic backend/frontend architecture. I am trying to write some basic integration tests using the Selenium WebDriver.
The problem is that the tests I write pass without any problem on my local development machine but do not pass when I run them through the continuous integration setup (GitLab CI).
The code of the example test is the following:
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ActiveProfiles("test")
public class ExampleTest {

    @LocalServerPort
    private int port;

    WebDriver wd;

    @Test
    public void successfulLogin() {
        String url = "http://localhost:" + port;
        wd = new HtmlUnitDriver();
        wd.manage().timeouts().pageLoadTimeout(30, TimeUnit.SECONDS);
        wd.get(url);
    }
}
The relevant gitlab-ci.yml portion is:
stages:
    - test

maven-test:
    image: maven:3.5.0-jdk-8
    stage: test
    script: "mvn test -B -Dmaven.repo.local=/root/.m2/"
    only:
        - master
The CI has a single runner (version 9.5.0) with concurrency 1. It uses the Docker executor with the docker:stable image. I don't know if it is needed, but it is running in privileged mode.
When running the tests on the CI environment, they fail with the following error:
org.openqa.selenium.TimeoutException: java.net.SocketTimeoutException: Read timed out
I have tried both url = "http://localhost:" + port and url = InetAddress.getLocalHost().getHostName() + ":" + port; both passed locally, neither passed in the CI environment.
What am I doing wrong?
EDIT: Alfageme suggested a more accurate testing methodology. On the same server the CI is running on, I cloned my repository with git clone and then ran the following command:
sudo gitlab-runner exec docker maven-test
The test passed without any problem. I am really running out of ideas, does someone have any?
I am not exactly sure why, but clearing the various runner-XXXXX-project-21-concurrent-0-cache-XXXXXXXXXXXXXXXXXXX Docker containers on the CI machine seemed to have solved the issue.
EDIT: This fixed the issue only momentarily. The problem happened again, and this time clearing the cache volumes did not help. Does anyone have any suggestions?
I'm trying to get my test runner application completely Dockerized. I use the public hub and node images to create a Selenium Grid which works fine - I can run my tests locally against the Dockerized Grid. Now, all I need to do is Dockerize my test app code and run it against the Grid. I created a docker-compose file to setup the grid and then run the test code. Unfortunately, when the tests run from the Docker container they seem to be unable to connect to the hub. I checked the logs of the test runner container and I see some output from the first step of the test. It then hangs there for around a minute and outputs the following:
Net::ReadTimeout (Net::ReadTimeout)
I shelled into the docker test runner container and was able to ping the hub from there so I believe the test runner can talk to the hub. I specified my driver configuration like so:
Capybara.register_driver :remote_hub_chrome do |app|
  caps = Selenium::WebDriver::Remote::Capabilities.chrome
  caps.version = "59.0.3071.115"
  caps.platform = "LINUX"

  Capybara::Selenium::Driver.new(
    app,
    :browser => :chrome,
    :url => "http://hub-container:4444/wd/hub",
    :desired_capabilities => caps
  )
end
As you can see, it will try to hit the hub-container domain, which it should be able to since I can ping it from within the container.
I do not see any log info on the browser node container, so it seems it was never even reached. I am able to run the exact same test from my local machine outside of the Docker container; the only difference is that I have to change hub-container to localhost, since I'm no longer running from within the container.
Does anyone have any idea why I can't get the test to run from within a docker container?
Compose file:
version: "3"
services:
  hub:
    image: selenium/hub
    ports:
      - "4444:4444"
    networks:
      - ui-test
  firefox:
    image: selenium/node-firefox-debug
    ports:
      - "5900"
    depends_on:
      - hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    networks:
      - ui-test
  chrome:
    image: selenium/node-chrome-debug
    ports:
      - "5900"
    depends_on:
      - hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    networks:
      - ui-test
  test-runner:
    image: test-runner
    depends_on:
      - hub
      - chrome
      - firefox
    networks:
      - ui-test
networks:
  ui-test:
    driver: bridge
A lot of things can go wrong with such a complex setup. I eventually made it work without the Grid, after many lost hours of debugging. Since you posted a Chrome setup, here is how I managed to make it run:
caps = Selenium::WebDriver::Remote::Capabilities.chrome(
  'chromeOptions' => {
    'args' => ['--start-maximized', '--disable-infobars',
               '--no-sandbox', '--whitelisted-ips']
  }
)
So you should add those two flags, '--no-sandbox' and '--whitelisted-ips', to make the chromedriver binary work with a Docker/remote setup. Also check whether your binary actually has execute permissions via ls -la; if not, try running chmod +x chromedriver (or even chmod 777 chromedriver), and do the same for geckodriver, which should be placed in /usr/bin according to the Mozilla docs. If you still have issues with the latter, you have to follow the Mozilla docs:
"Even though the project has been renamed to GeckoDriver, many of the selenium clients look for the old name.
You need to rename the binary file to 'wires' (the old name) and ensure it is executable."
One last thing that can tell you whether there are problems with the driver executables is to run them standalone: just go to their location (for geckodriver that is /usr/bin) and start them like ./geckodriver; the output should help you catch errors, if any are present.
In case your nodes don't have displays, you need a headless or Xvfb setup, so be sure to troubleshoot this as well. Display ports should be accessible too.
Update the :url option in your driver configuration to :url => "http://hub:4444/wd/hub". The hostname must match the name of the hub service defined in your compose file, which is hub, not hub-container.
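A minimal configuration sketch of the corrected driver registration, assuming the compose file above (Docker's network resolves the service name hub, so that is the hostname the test runner must use):

```ruby
require 'capybara'
require 'selenium-webdriver'

# Sketch: Capybara driver pointing at the hub *service name* from the compose file.
Capybara.register_driver :remote_hub_chrome do |app|
  caps = Selenium::WebDriver::Remote::Capabilities.chrome
  Capybara::Selenium::Driver.new(
    app,
    :browser => :chrome,
    :url => "http://hub:4444/wd/hub",  # "hub" matches the compose service name
    :desired_capabilities => caps
  )
end
```

Everything else in the original registration can stay as it was; only the hostname in :url needs to change.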
I've worked with Cucumber and Watir before and, when I wanted to run a test in firefox, I just created a new browser instance the following way:
client = Selenium::WebDriver::Remote::Http::Default.new
client.timeout = 60

$browser = Watir::Browser.new :firefox, profile: $profile, :http_client => client
Now I am using Behat, and when I want to run a test in Firefox, I have to define the Selenium parameters in behat.yml first:
Behat\MinkExtension:
    base_url: 'http://test.com'
    sessions:
        default:
            goutte: ~
        javascript:
            selenium2:
                wd_host: http://localhost:4444/wd/hub
But I also have to run the Selenium standalone server in a separate terminal window:
java -jar selenium-server-standalone-2.44.0.jar
And leave it running in the background.
My question is: is there any way to make Mink work like Watir, launching a new browser instance when a test runs and killing it after it finishes, without having to worry about keeping the service running in the background?