I'm running Selenium Grid through Docker Compose and targeting the different versions through my desired capabilities.
I have a requirement to test multiple browser versions, but at the moment I need to target the full version, i.e. Chrome version "63.0.3239.132" or "64.0.3282.140".
I want to be able to specify just 63 or 64, etc., so that my Docker setup can update regularly without the need to update the code.
Is there a way to do this through desired capabilities?
Below is my docker-compose file:
version: '2'
services:
  seleniumhub:
    image: selenium/hub:3.9.1-actinium
    ports:
      - 4444:4444
  chrome64:
    image: selenium/node-chrome-debug:3.9.1-actinium
    depends_on:
      - seleniumhub
    environment:
      HUB_PORT_4444_TCP_ADDR: seleniumhub
      HUB_PORT_4444_TCP_PORT: 4444
    ports:
      - 5900
  chrome63:
    image: selenium/node-chrome-debug:3.8.1-erbium
    depends_on:
      - seleniumhub
    environment:
      HUB_PORT_4444_TCP_ADDR: seleniumhub
      HUB_PORT_4444_TCP_PORT: 4444
    ports:
      - 5900
Below is how I set up my desired capabilities:
ICapabilities caps = new DesiredCapabilities();
// Generic desktop browser config:
if (DriverConfig.BrowserName != "")
{
    ((DesiredCapabilities)caps).SetCapability("browserName", _browserName);
}
if (DriverConfig.Version != "")
{
    ((DesiredCapabilities)caps).SetCapability("version", _version);
}
if (DriverConfig.Platform != "")
{
    ((DesiredCapabilities)caps).SetCapability("platform", _platform);
}
If you specify -browser browserName=chrome,version=63 on the command line when registering a Selenium node with the Selenium hub, then you can specify version: 63 in the capabilities of your test script.
So your problem comes down to whether you can specify -browser in the registration command when setting up the grid with Docker.
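With the official node images you can usually pass extra registration flags through the SE_OPTS environment variable, which the image's entrypoint appends to the node's registration command (the same mechanism appears in the Selenium compose file further down this page). A sketch for the chrome63 service, assuming the 3.x node entrypoint honours SE_OPTS:
chrome63:
  image: selenium/node-chrome-debug:3.8.1-erbium
  depends_on:
    - seleniumhub
  environment:
    HUB_PORT_4444_TCP_ADDR: seleniumhub
    HUB_PORT_4444_TCP_PORT: 4444
    # Register this node as plain "version=63" so tests can request the
    # major version regardless of which 63.0.x build the image ships.
    SE_OPTS: "-browser browserName=chrome,version=63"
With nodes registered like this, the capability code in the question only needs DriverConfig.Version set to "63".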
I'm having major problems getting Dapr up and running with my microservices. Every time I try to invoke another service, it returns a 500 error with the message:
client error: the server closed connection before returning the first response byte. Make sure the server returns 'Connection: close' response header before closing the connection
The services and Dapr sidecars currently run in docker-compose on our dev machines, but will run in Kubernetes when deployed properly.
When I look at the logs of the Dapr containers in Docker for Windows, I can see the application being discovered on port 443 and a few initialisation messages, but nothing else ever gets logged after that, even when I make my invoke request.
I have a container called clients, in which I'm calling an API called test, and this in turn tries to call Microsoft's example weather forecast API in another container called simpleapi.
I'm using Swagger UI to call the APIs. The test API returns 200, but when I put a breakpoint on the invoke, I can see the response is 500.
If I call the weatherforecast API directly using Swagger UI, it returns a 200 with the expected payload.
I have the Dapr dashboard running in a container and it doesn't show any applications.
docker-compose.yml:
version: '3.4'
services:
  clients:
    image: ${DOCKER_REGISTRY-}clients
    container_name: "Clients"
    build:
      context: .
      dockerfile: Clients/Dockerfile
    ports:
      - "50002:50002"
    depends_on:
      - placement
      - database
    networks:
      - platform
  clients-dapr:
    image: "daprio/daprd:edge"
    container_name: clients-dapr
    command: [
      "./daprd",
      "-app-id", "clients",
      "-app-port", "443",
      "-placement-host-address", "placement:50006",
      "-dapr-grpc-port", "50002"
    ]
    depends_on:
      - clients
    network_mode: "service:clients"
  simpleapi:
    image: ${DOCKER_REGISTRY-}simpleapi
    build:
      context: .
      dockerfile: SimpleAPI/Dockerfile
    ports:
      - "50003:50003"
    depends_on:
      - placement
    networks:
      - platform
  simpleapi-dapr:
    image: "daprio/daprd:edge"
    container_name: simpleapi-dapr
    command: [
      "./daprd",
      "-app-id", "simpleapi",
      "-app-port", "443",
      "-placement-host-address", "placement:50006",
      "-dapr-grpc-port", "50003"
    ]
    depends_on:
      - simpleapi
    network_mode: "service:simpleapi"
  placement:
    image: "daprio/dapr"
    container_name: placement
    command: ["./placement", "-port", "50006"]
    ports:
      - "50006:50006"
    networks:
      - platform
  dashboard:
    image: "daprio/dashboard"
    container_name: dashboard
    ports:
      - "8080:8080"
    networks:
      - platform
networks:
  platform:
Test controller from the Clients API:
using System.Threading.Tasks;
using Dapr.Client;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
[ApiController]
public class TestController : ControllerBase
{
    [HttpGet]
    public async Task<ActionResult> Get()
    {
        // CreateInvokeHttpClient routes requests through the Dapr sidecar:
        // the host name ("simpleapi") is the target app-id, not a real host.
        var httpClient = DaprClient.CreateInvokeHttpClient();
        var response = await httpClient.GetAsync("https://simpleapi/weatherforecast");
        // Breakpoint here: this is where the 500 response shows up.
        return Ok();
    }
}
This is a major new project for my company and it's looking like we're going to have to abandon Dapr and implement everything ourselves if we can't get this working soon.
I'm hoping there's some glaringly obvious problem here.
It actually turned out to be quite simple: I needed to tell Dapr to use SSL.
clients-dapr needed the -app-ssl parameter, so it should have been as follows (simpleapi-dapr needs the same parameter added too):
clients-dapr:
  image: "daprio/daprd:edge"
  container_name: clients-dapr
  command: [
    "./daprd",
    "-app-id", "clients",
    "-app-port", "443",
    "-app-ssl",
    "-placement-host-address", "placement:50006",
    "-dapr-grpc-port", "50002"
  ]
  depends_on:
    - clients
  network_mode: "service:clients"
You can also run your service outside Docker on its specific port and check that Dapr works as expected; you can specify the HTTP port and gRPC port explicitly:
dapr run `
--app-id serviceName `
--app-port 5139 `
--dapr-http-port 3500 `
--dapr-grpc-port 50001 `
--components-path ./dapr-components
If the above setup works, you can then move the same configuration into Docker; check the solution above.
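To sanity-check invocation while running this way, you can also call the sidecar's HTTP invoke endpoint directly; v1.0/invoke is Dapr's standard HTTP API. Assuming the dapr-http-port above, and borrowing the question's weatherforecast route as the illustrative method name:
curl http://localhost:3500/v1.0/invoke/serviceName/method/weatherforecast
A 200 here confirms the sidecar can reach the app; a 500 usually points at the sidecar-to-app hop (e.g. the port or SSL mismatch fixed above).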
I looked at this: nsq cannot consume message by connecting to nsqlookupd
But it doesn't apply to me; I have tried all sorts of approaches. It could be the environment.
System: VMware CentOS + docker-compose NSQ
Version: all latest
docker-compose.yml:
version: '3'
services:
  nsqlookupd:
    image: nsqio/nsq
    command: /nsqlookupd
    ports:
      - "4160:4160"
      - "4161:4161"
  nsqd:
    image: nsqio/nsq
    command: /nsqd --lookupd-tcp-address=nsqlookupd:4160
    depends_on:
      - nsqlookupd
    ports:
      - "4150:4150"
      - "4151:4151"
  nsqadmin:
    image: nsqio/nsq
    command: /nsqadmin --lookupd-http-address=nsqlookupd:4161
    depends_on:
      - nsqlookupd
    ports:
      - "4171:4171"
I tried adding -broadcast-address=127.0.0.1 to the nsqd command, but that caused an error on the nsqadmin page.
docker-compose config:
services:
  nsqadmin:
    command: /nsqadmin --lookupd-http-address=nsqlookupd:4161
    depends_on:
      - nsqlookupd
    image: nsqio/nsq
    ports:
      - 4171:4171/tcp
  nsqd:
    command: /nsqd --lookupd-tcp-address=nsqlookupd:4160 -broadcast-address=127.0.0.1
    depends_on:
      - nsqlookupd
    image: nsqio/nsq
    ports:
      - 4150:4150/tcp
      - 4151:4151/tcp
  nsqlookupd:
    command: /nsqlookupd
    image: nsqio/nsq
    ports:
      - 4160:4160/tcp
      - 4161:4161/tcp
version: '3.0'
I hope you understand what I mean; after all, my English is poor.
Any ideas?
package main

import (
    "fmt"

    "github.com/nsqio/go-nsq"
)

// Producer publishes five test messages to nsqd.
func Producer() {
    producer, err := nsq.NewProducer("192.168.132.128:4150", nsq.NewConfig())
    if err != nil {
        fmt.Println("NewProducer", err)
        panic(err)
    }
    for i := 0; i < 5; i++ {
        if err := producer.Publish("test", []byte("Hello World")); err != nil {
            fmt.Println("Publish", err)
            panic(err)
        }
    }
}
This code works; it can publish messages to nsqd. But the consumer can't connect to nsqd.
Look at this:
2019/07/05 14:19:00 INF 2 [test/testq] querying nsqlookupd http://192.168.132.128:4161/lookup?topic=test
2019/07/05 14:19:00 INF 2 [test/testq] (60366475943f:4150) connecting to nsqd
2019/07/05 14:19:01 ERR 2 [test/testq] (60366475943f:4150) error connecting to nsqd - dial tcp: i/o timeout
And this is the nsqlookupd lookup response:
{"channels":["testq"],"producers":[{"remote_address":"172.19.0.2:57250","hostname":"60366475943f","broadcast_address":"60366475943f","tcp_port":4150,"http_port":4151,"version":"1.1.0"}]}
I think the problem arises in the nsqlookupd connection, but I don't know how to deal with it.
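For completeness, the consumer side looks roughly like this (a minimal sketch using the go-nsq client; the print handler is illustrative, and the test/testq names match the log output above):
package main

import (
    "fmt"

    "github.com/nsqio/go-nsq"
)

type printHandler struct{}

// HandleMessage just prints each message body.
func (h *printHandler) HandleMessage(m *nsq.Message) error {
    fmt.Println(string(m.Body))
    return nil
}

func Consumer() {
    consumer, err := nsq.NewConsumer("test", "testq", nsq.NewConfig())
    if err != nil {
        panic(err)
    }
    consumer.AddHandler(&printHandler{})
    // The consumer asks nsqlookupd where the topic lives, then dials the
    // broadcast_address:tcp_port pair that nsqd registered. A container
    // hostname such as 60366475943f is not resolvable from outside Docker,
    // which is exactly the "error connecting to nsqd" in the log above.
    if err := consumer.ConnectToNSQLookupd("192.168.132.128:4161"); err != nil {
        panic(err)
    }
    select {} // block while messages are handled
}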
I think you forgot something. Have you added this to your /etc/hosts?
127.0.0.1 nsqd
It allows your machine to resolve the nsqd name used inside the Docker network. Hope this can help you. CMIIW.
I fixed this by adding the broadcast address as below:
--broadcast-address=nsqd
So your docker-compose nsqd service should look like this:
nsqd:
  command: /nsqd --lookupd-tcp-address=nsqlookupd:4160 --broadcast-address=nsqd
  depends_on:
    - nsqlookupd
  image: nsqio/nsq
  ports:
    - 4150:4150/tcp
    - 4151:4151/tcp
It should be working now
It seems to me like a network problem. This may not be the best solution for anything other than a local environment, but it works great for me in testing:
version: '3'
services:
  nsqlookupd:
    image: nsqio/nsq
    command: /nsqlookupd
    network_mode: host
  nsqd:
    image: nsqio/nsq
    command: /nsqd --lookupd-tcp-address=localhost:4160 # nsqlookupd:4160
    depends_on:
      - nsqlookupd
    network_mode: host
  nsqadmin:
    image: nsqio/nsq
    command: /nsqadmin --lookupd-http-address=localhost:4161 # nsqlookupd:4161
    depends_on:
      - nsqlookupd
    network_mode: host
  nsqfiles:
    image: nsqio/nsq
    command: /nsq_to_file --lookupd-http-address=localhost:4161 --topic=testing --output-dir=/tmp/nsq/logs
    depends_on:
      - nsqlookupd
    network_mode: host
    volumes:
      - messages-queue:/tmp/nsq/logs
volumes:
  messages-queue:
    driver: local
I have the following docker-compose.yml, which starts a standalone Chrome container and a Node.js application:
version: '3.7'
networks:
  selenium:
services:
  selenium:
    image: selenium/standalone-chrome-debug:3
    networks:
      - selenium
    ports:
      - '4444:4444'
      - '5900:5900'
    volumes:
      - /dev/shm:/dev/shm
    user: '7777:7777'
  node:
    image: node_temp:latest
    build:
      context: .
      target: development
      args:
        UID: '${USER_UID}'
        GID: '${USER_GID}'
    networks:
      - selenium
    env_file:
      - .env
    ports:
      - '8090:8090'
    volumes:
      - .:/home/node
    depends_on:
      - selenium
    command: >
      sh -c 'yarn install &&
      yarn dev'
I'm running the containers as follows:
docker-compose up -d selenium
docker-compose run --service-ports node sh
and starting the e2e tests from within that shell.
When running the e2e tests, selenium can be reached from the node container (through http://selenium:4444), but node isn't reachable from the selenium container.
I have tested this by VNC'ing into the selenium container and pointing the browser to http://node:8090. (The node container is reachable on the host, however, through http://localhost:8090.)
I first thought that docker-compose run doesn't add the running container to the proper network; however, running docker network inspect test_app gives me the following:
[
    {
        "Name": "test_app_selenium",
        "Id": "df6517cc7b6446d1712b30ee7482c83bb7c3a9d26caf1104921abd6bbe2caf68",
        "Created": "2019-06-30T16:08:50.724889157+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.31.0.0/16",
                    "Gateway": "172.31.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "8a76298b237790c62f80ef612debb021549439286ce33e3e89d4ee2f84de3aec": {
                "Name": "test_app_node_run_78427bac2fd1",
                "EndpointID": "04310bc4e564f831e5d08a0e07891d323a5953fa936e099d20e5e384a6053da8",
                "MacAddress": "02:42:ac:1f:00:03",
                "IPv4Address": "172.31.0.3/16",
                "IPv6Address": ""
            },
            "ef087732aacf0d293a2cf956855a163a081fc3748ffdaa01c240bde452eee0fa": {
                "Name": "test_app_selenium_1",
                "EndpointID": "24a597e30a3b0b671c8b19fd61b9254bea9e5fcbd18693383d93d3df789ed895",
                "MacAddress": "02:42:ac:1f:00:02",
                "IPv4Address": "172.31.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "selenium",
            "com.docker.compose.project": "test_app",
            "com.docker.compose.version": "1.24.1"
        }
    }
]
This shows both containers running on the "selenium" network. I'm not sure, however, whether the node container is properly aliased on the network and whether this is the intended behaviour.
Am I missing some config here?
It seems that docker-compose run names the container differently, to avoid clashing with the service name noted in docker-compose.yml, so http://node:8090 was not reachable.
I solved this by adding a --name flag as follows:
docker-compose run --service-ports --name node node sh
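To check what a name actually resolves to on the compose network, one option is to run a throwaway container attached to the same network (a sketch; the busybox image and the test_app_selenium network name from the inspect output above are the assumptions here):
docker run --rm --network test_app_selenium busybox nslookup node
Docker's embedded DNS resolves container names as well as network aliases on user-defined networks, which is why pinning the container name to node makes it reachable.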
EDIT:
It took me a while to notice, but I was overcomplicating the implementation by a lot. The above docker-compose.yml can be simplified with host networking, which exposes all running containers on localhost and makes them reachable there by their specified ports. Since I don't need any encapsulation (it's meant for dev), the following docker-compose.yml sufficed:
version: '3.7'
services:
  selenium:
    image: selenium/standalone-chrome:3
    # NOTE: port definitions are useless with network_mode: host
    network_mode: host
    user: '7777:7777'
  node:
    image: node_temp:latest
    build:
      context: .
      target: development
      args:
        UID: '${USER_UID}'
        GID: '${USER_GID}'
    network_mode: host
    env_file:
      - .env
    volumes:
      - .:/home/node
    command: >
      sh -c 'yarn install &&
      yarn dev'
I would like to add some e2e tests for my Vue.js application and run them in the pipeline.
The corresponding part of my gitlab-ci.yml looks like this:
e2e:
  image: node:8
  before_script:
    - npm install
  services:
    - name: selenium/standalone-chrome
      alias: chrome
  stage: testing
  script:
    - cd online-leasing-frontend
    - npm install
    - npm run test:e2e
And my nightwatch.js config:
{
  "selenium": {
    "start_process": false
  },
  "test_settings": {
    "default": {
      "selenium_port": 4444,
      "selenium_host": "chrome"
    }
  }
}
Is "selenium_host": "chrome" the correct way of setting the host to the Selenium service?
I get the following error, indicating that my e2e test can't connect to the Selenium service:
Connection refused! Is selenium server started?
Any tips?
The problem was that, according to this issue, GitLab CI uses the Kubernetes executor instead of the Docker executor, which maps all services to 127.0.0.1. After setting selenium_host to this address, everything worked.
{
  "selenium": {
    "start_process": false
  },
  "test_settings": {
    "default": {
      "selenium_port": 4444,
      "selenium_host": "127.0.0.1"
    }
  }
}
On the Selenium Repo it says:
"When executing docker run for an image with Chrome or Firefox please either mount -v /dev/shm:/dev/shm or use the flag --shm-size=2g to use the host's shared memory."
I don't know GitLab CI well enough, but I'm afraid it is not possible to add this as a parameter to a service.
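If you administer your own runner, the Docker executor's shared-memory size can be raised in the runner's config.toml instead of per service (a sketch; shm_size is specified in bytes, and 2147483648 mirrors the --shm-size=2g recommendation):
[runners.docker]
  shm_size = 2147483648
Whether this also covers service containers (not just the build container) depends on the runner version, so treat it as something to verify against your runner's documentation.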
I am trying to switch from Selenium to aerokube/selenoid.
The following Selenium setup works:
version: '2.1'
services:
  hub:
    image: selenium/hub:2.53.0
    ports:
      - "4444:4444"
    networks:
      - default
  browser0:
    image: selenium/node-firefox-debug:2.53.0
    ports:
      - "5555"
    networks:
      - default
    depends_on:
      - hub
    environment:
      - SE_OPTS=-log $PWD/logs/selenium-logs
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
      - no_proxy=localhost
I tried the following selenoid setup:
version: '3'
services:
  selenoid:
    image: selenoid/vnc:firefox_53.0
    network_mode: bridge
    ports:
      - "4444:4444"
    volumes:
      - ".:/etc/selenoid"
      - "/var/run/docker.sock:/var/run/docker.sock"
  selenoid-ui:
    image: aerokube/selenoid-ui
    network_mode: bridge
    links:
      - selenoid
    ports:
      - "8080:8080"
    command: ["--selenoid-uri", "http://selenoid:4444"]
It fails with: Could not open connection: Notice: Undefined index: status in /ProjectPath/vendor/instaclick/php-webdriver/lib/WebDriver/AbstractWebDriver.php line 139 (Behat\Mink\Exception\DriverException)
The source code there is the following:
// if not success, throw exception
if ((int) $result['status'] !== 0) {
    throw WebDriverException::factory($result['status'], $message);
}
When I var_dump($result); die;, I get:
array(1) {
  ["value"]=>
  array(2) {
    ["sessionId"]=>
    string(36) "20c829fa-7f73-45a5-b440-8a3282b4feea"
    ["capabilities"]=>
    array(12) {
      ["acceptInsecureCerts"]=>
      bool(false)
      ["browserName"]=>
      string(7) "firefox"
      ["browserVersion"]=>
      string(6) "55.0.1"
      ["moz:accessibilityChecks"]=>
      bool(false)
      ["moz:processID"]=>
      int(35)
      ["moz:profile"]=>
      string(33) "/tmp/rust_mozprofile.BdIIDrRL7KKu"
      ["pageLoadStrategy"]=>
      string(6) "normal"
      ["platformName"]=>
      string(5) "linux"
      ["platformVersion"]=>
      string(14) "3.16.0-4-amd64"
      ["rotatable"]=>
      bool(false)
      ["specificationLevel"]=>
      int(0)
      ["timeouts"]=>
      array(3) {
        ["implicit"]=>
        int(0)
        ["pageLoad"]=>
        int(300000)
        ["script"]=>
        int(30000)
      }
    }
  }
}
So it does something.
I'm not sure what the problem is; any help would be appreciated.
This error should reproduce with the latest Firefox versions only (e.g. 53.0, 54.0 or 55.0); all the rest should work. This is because browser images for these versions proxy directly to Geckodriver, which follows the W3C Selenium protocol specification starting from release 0.16.0. This specification has a slightly different JSON exchange format than previous Selenium versions supported: notice that the response above nests everything under "value" and has no top-level "status" key, which is exactly the index the PHP client trips over. So in order to fix this issue you just need to update your PHP Selenium client to the latest version supporting the new format. I'm not sure about the concrete version, but e.g. for Java it works starting from version 3.4.0.
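As an aside on the compose file in the question: the selenoid service normally runs the aerokube/selenoid image and reads its browser-to-image mapping from a browsers.json in the directory mounted at /etc/selenoid (the ".:/etc/selenoid" volume above). A minimal sketch of such a file for the Firefox image in question, following Selenoid's documented format for Selenium-based Firefox images:
{
  "firefox": {
    "default": "53.0",
    "versions": {
      "53.0": {
        "image": "selenoid/vnc:firefox_53.0",
        "port": "4444",
        "path": "/wd/hub"
      }
    }
  }
}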