I am trying to switch from selenium to aerokube/selenoid.
The following Selenium setup works:
version: '2.1'
services:
  hub:
    image: selenium/hub:2.53.0
    ports:
      - "4444:4444"
    networks:
      - default
  browser0:
    image: selenium/node-firefox-debug:2.53.0
    ports:
      - "5555"
    networks:
      - default
    depends_on:
      - hub
    environment:
      SE_OPTS: '-log $PWD/logs/selenium-logs'
      HUB_PORT_4444_TCP_ADDR: hub
      HUB_PORT_4444_TCP_PORT: 4444
      no_proxy: localhost
I tried the following Selenoid setup:
version: '3'
services:
  selenoid:
    image: selenoid/vnc:firefox_53.0
    network_mode: bridge
    ports:
      - "4444:4444"
    volumes:
      - ".:/etc/selenoid"
      - "/var/run/docker.sock:/var/run/docker.sock"
  selenoid-ui:
    image: aerokube/selenoid-ui
    network_mode: bridge
    links:
      - selenoid
    ports:
      - "8080:8080"
    command: ["--selenoid-uri", "http://selenoid:4444"]
It fails with: Could not open connection: Notice: Undefined index: status in /ProjectPath/vendor/instaclick/php-webdriver/lib/WebDriver/AbstractWebDriver.php line 139 (Behat\Mink\Exception\DriverException)
The source code there is the following:
// if not success, throw exception
if ((int) $result['status'] !== 0) {
    throw WebDriverException::factory($result['status'], $message);
}
When I add var_dump($result); die; there, I get:
array(1) {
  ["value"]=>
  array(2) {
    ["sessionId"]=>
    string(36) "20c829fa-7f73-45a5-b440-8a3282b4feea"
    ["capabilities"]=>
    array(12) {
      ["acceptInsecureCerts"]=>
      bool(false)
      ["browserName"]=>
      string(7) "firefox"
      ["browserVersion"]=>
      string(6) "55.0.1"
      ["moz:accessibilityChecks"]=>
      bool(false)
      ["moz:processID"]=>
      int(35)
      ["moz:profile"]=>
      string(33) "/tmp/rust_mozprofile.BdIIDrRL7KKu"
      ["pageLoadStrategy"]=>
      string(6) "normal"
      ["platformName"]=>
      string(5) "linux"
      ["platformVersion"]=>
      string(14) "3.16.0-4-amd64"
      ["rotatable"]=>
      bool(false)
      ["specificationLevel"]=>
      int(0)
      ["timeouts"]=>
      array(3) {
        ["implicit"]=>
        int(0)
        ["pageLoad"]=>
        int(300000)
        ["script"]=>
        int(30000)
      }
    }
  }
}
So it does create a session. I'm not sure what the problem is; any help would be appreciated.
This error reproduces only with the latest Firefox versions (e.g. 53.0, 54.0 or 55.0); all the rest should work. That is because the browser images for these versions proxy directly to Geckodriver, which follows the W3C Selenium protocol specification starting from release 0.16.0. That specification uses a slightly different JSON exchange format than the one previous Selenium versions supported. So to fix this issue you just need to update your PHP Selenium client to the latest version that supports the new format. I'm not sure about the concrete PHP version, but for Java, for example, it works starting from 3.4.0.
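To see the difference concretely: the pre-W3C (JSON Wire) new-session response that the status check above expects wraps everything with a top-level status field, roughly

{"status": 0, "sessionId": "...", "value": { ...capabilities... }}

while the W3C response returned by Geckodriver (the one in the var_dump above) only has a top-level "value" key, which is why $result['status'] comes back as an undefined index.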
Related
I'm having major problems getting Dapr up and running with my microservices. Every time I try to invoke another service, it returns a 500 error with the message
client error: the server closed connection before returning the first response byte. Make sure the server returns 'Connection: close' response header before closing the connection
The services and dapr sidecars are currently running in docker-compose on our dev machines but will run in Kubernetes when it is deployed properly.
When I look at the logs for the dapr containers in Docker for Windows, I can see the application being discovered on port 443 and a few initialisation messages but nothing else ever gets logged after that, even when I make my invoke request.
I have a container called clients, in which I'm calling an API called test, which in turn tries to call Microsoft's example weather forecast API in another container called simpleapi.
I'm using Swagger UI to call the APIs. The test API returns 200, but when I put a breakpoint on the invoke, I can see the response is 500.
If I call the weatherforecast api directly using swaggerui, it returns a 200 with the expected payload.
I have the Dapr dashboard running in a container and it doesn't show any applications.
Docker-Compose.yml
version: '3.4'
services:
  clients:
    image: ${DOCKER_REGISTRY-}clients
    container_name: "Clients"
    build:
      context: .
      dockerfile: Clients/Dockerfile
    ports:
      - "50002:50002"
    depends_on:
      - placement
      - database
    networks:
      - platform
  clients-dapr:
    image: "daprio/daprd:edge"
    container_name: clients-dapr
    command: [
      "./daprd",
      "-app-id", "clients",
      "-app-port", "443",
      "-placement-host-address", "placement:50006",
      "-dapr-grpc-port", "50002"
    ]
    depends_on:
      - clients
    network_mode: "service:clients"
  simpleapi:
    image: ${DOCKER_REGISTRY-}simpleapi
    build:
      context: .
      dockerfile: SimpleAPI/Dockerfile
    ports:
      - "50003:50003"
    depends_on:
      - placement
    networks:
      - platform
  simpleapi-dapr:
    image: "daprio/daprd:edge"
    container_name: simpleapi-dapr
    command: [
      "./daprd",
      "-app-id", "simpleapi",
      "-app-port", "443",
      "-placement-host-address", "placement:50006",
      "-dapr-grpc-port", "50003"
    ]
    depends_on:
      - simpleapi
    network_mode: "service:simpleapi"
  placement:
    image: "daprio/dapr"
    container_name: placement
    command: ["./placement", "-port", "50006"]
    ports:
      - "50006:50006"
    networks:
      - platform
  dashboard:
    image: "daprio/dashboard"
    container_name: dashboard
    ports:
      - "8080:8080"
    networks:
      - platform
networks:
  platform:
Test controller from the Clients API.
[Route("api/[controller]")]
[ApiController]
public class TestController : ControllerBase
{
    [HttpGet]
    public async Task<ActionResult> Get()
    {
        var httpClient = DaprClient.CreateInvokeHttpClient();
        var response = await httpClient.GetAsync("https://simpleapi/weatherforecast");
        return Ok();
    }
}
This is a major new project for my company and it's looking like we're going to have to abandon Dapr and implement everything ourselves if we can't get this working soon.
I'm hoping there's some glaringly obvious problem here.
Actually it turned out to be quite simple: I needed to tell Dapr to use SSL.
The clients-dapr service needed the -app-ssl parameter, so clients-dapr should have been as follows (simpleapi-dapr needs the same parameter added too):
clients-dapr:
  image: "daprio/daprd:edge"
  container_name: clients-dapr
  command: [
    "./daprd",
    "-app-id", "clients",
    "-app-port", "443",
    "-app-ssl",
    "-placement-host-address", "placement:50006",
    "-dapr-grpc-port", "50002"
  ]
  depends_on:
    - clients
  network_mode: "service:clients"
You can run your service outside Docker first and check that Dapr works as expected; you can specify the HTTP port and gRPC port explicitly:
dapr run `
--app-id serviceName `
--app-port 5139 `
--dapr-http-port 3500 `
--dapr-grpc-port 50001 `
--components-path ./dapr-components
If that setup works, you can then move the same configuration into Docker; see the solution above.
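You can also hit the sidecar's HTTP API directly to confirm the invocation path, for example (ports and app id taken from the command above; weatherforecast is just the example method from the question, substitute your own):

curl http://localhost:3500/v1.0/invoke/serviceName/method/weatherforecast

If that returns your payload, the sidecar and the app are wired up correctly.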
I couldn't find how to configure the Selenium Grid with Docker Compose.
How can I set, for example, maxSession in the docker-compose.yml for a node?
I tried the following without success:
selenium-hub:
  image: selenium/hub
  networks:
    - mynet
  environment:
    - MAX_SESSION=4 # DOES NOT WORK
    - maxSession=4  # DOES NOT WORK
  hostname: selenium-hub
  ports:
    - "4444:4444"
chrome:
  image: selenium/node-chrome-debug
  networks:
    - mynet
  volumes:
    - /dev/shm:/dev/shm
  depends_on:
    - selenium-hub
  environment:
    - HUB_HOST=selenium-hub
    - HUB_PORT=4444
    - MAX_SESSION=4 # DOES NOT WORK
    - maxSession=4  # DOES NOT WORK
  ...
You need to add this environment variable under the chrome service:

NODE_MAX_SESSION=4 # an integer, maps to "maxSession"

Read more here.
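For clarity, a rough sketch of where it goes in the compose file above (NODE_MAX_INSTANCES is an assumption here; it is commonly raised together with NODE_MAX_SESSION so the node actually offers that many browser slots):

chrome:
  image: selenium/node-chrome-debug
  networks:
    - mynet
  depends_on:
    - selenium-hub
  environment:
    - HUB_HOST=selenium-hub
    - HUB_PORT=4444
    - NODE_MAX_SESSION=4   # maps to "maxSession" on the node
    - NODE_MAX_INSTANCES=4 # assumption: number of browser instances the node may run in parallel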
I looked at this: nsq cannot consume message by connecting to nsqlookupd. But it doesn't apply to me; I've tried all sorts of things. It could be the environment.
System: VMware CentOS + docker-compose NSQ
Version: all latest
docker-compose.yml:
version: '3'
services:
  nsqlookupd:
    image: nsqio/nsq
    command: /nsqlookupd
    ports:
      - "4160:4160"
      - "4161:4161"
  nsqd:
    image: nsqio/nsq
    command: /nsqd --lookupd-tcp-address=nsqlookupd:4160
    depends_on:
      - nsqlookupd
    ports:
      - "4150:4150"
      - "4151:4151"
  nsqadmin:
    image: nsqio/nsq
    command: /nsqadmin --lookupd-http-address=nsqlookupd:4161
    depends_on:
      - nsqlookupd
    ports:
      - "4171:4171"
I tried adding -broadcast-address=127.0.0.1 to the nsqd command, but that causes an error on the nsqadmin page.
Output of docker-compose config:
services:
  nsqadmin:
    command: /nsqadmin --lookupd-http-address=nsqlookupd:4161
    depends_on:
      - nsqlookupd
    image: nsqio/nsq
    ports:
      - 4171:4171/tcp
  nsqd:
    command: /nsqd --lookupd-tcp-address=nsqlookupd:4160 -broadcast-address=127.0.0.1
    depends_on:
      - nsqlookupd
    image: nsqio/nsq
    ports:
      - 4150:4150/tcp
      - 4151:4151/tcp
  nsqlookupd:
    command: /nsqlookupd
    image: nsqio/nsq
    ports:
      - 4160:4160/tcp
      - 4161:4161/tcp
version: '3.0'
I hope you understand what I mean; my English is poor. Any ideas?
package main

import (
    "fmt"

    "github.com/nsqio/go-nsq"
)

// Producer publishes a few test messages to nsqd.
func Producer() {
    producer, err := nsq.NewProducer("192.168.132.128:4150", nsq.NewConfig())
    if err != nil {
        fmt.Println("NewProducer", err)
        panic(err)
    }
    for i := 0; i < 5; i++ {
        if err := producer.Publish("test", []byte(fmt.Sprintf("Hello World "))); err != nil {
            fmt.Println("Publish", err)
            panic(err)
        }
    }
}
This code works; it successfully publishes messages to nsqd. But my consumer can't connect to nsqd. Look at this:
2019/07/05 14:19:00 INF 2 [test/testq] querying nsqlookupd http://192.168.132.128:4161/lookup?topic=test
2019/07/05 14:19:00 INF 2 [test/testq] (60366475943f:4150) connecting to nsqd
2019/07/05 14:19:01 ERR 2 [test/testq] (60366475943f:4150) error connecting to nsqd - dial tcp: i/o timeout
and this
{"channels":["testq"],"producers":[{"remote_address":"172.19.0.2:57250","hostname":"60366475943f","broadcast_address":"60366475943f","tcp_port":4150,"http_port":4151,"version":"1.1.0"}]}
I think the problem is in the nsqlookupd lookup step, but I don't know how to deal with it.
I think you forgot something. Have you added this to your /etc/hosts?

127.0.0.1 nsqd

It allows your machine to resolve the nsqd hostname used inside the Docker network. Hope this helps. CMIIW.
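To see why name resolution matters here: a go-nsq consumer (rough sketch below, not the actual code from the question; topic and channel taken from the error log) first queries nsqlookupd and then dials the broadcast_address that nsqd registered, which defaults to the container hostname (60366475943f in the lookup response above). So a consumer on the host can only connect if that name resolves, or if the broadcast address is overridden.

package main

import (
    "fmt"
    "log"

    "github.com/nsqio/go-nsq"
)

func main() {
    // Assumed topic "test" and channel "testq", taken from the log lines above.
    consumer, err := nsq.NewConsumer("test", "testq", nsq.NewConfig())
    if err != nil {
        log.Fatal(err)
    }
    consumer.AddHandler(nsq.HandlerFunc(func(m *nsq.Message) error {
        fmt.Println(string(m.Body))
        return nil
    }))
    // The consumer asks nsqlookupd for producers here, then connects to the
    // broadcast_address:tcp_port pair returned in the lookup response.
    if err := consumer.ConnectToNSQLookupd("192.168.132.128:4161"); err != nil {
        log.Fatal(err)
    }
    select {} // block forever
}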
I added the following flag: --broadcast-address=nsqd. So your docker-compose should look like this:
nsqd:
  command: /nsqd --lookupd-tcp-address=nsqlookupd:4160 --broadcast-address=nsqd
  depends_on:
    - nsqlookupd
  image: nsqio/nsq
  ports:
    - 4150:4150/tcp
    - 4151:4151/tcp
It should work now.
It seems like a network problem to me. This may not be the best solution for anything other than a local environment, but it works great for me in testing:
version: '3'
services:
  nsqlookupd:
    image: nsqio/nsq
    command: /nsqlookupd
    network_mode: host
  nsqd:
    image: nsqio/nsq
    command: /nsqd --lookupd-tcp-address=localhost:4160 # nsqlookupd:4160
    depends_on:
      - nsqlookupd
    network_mode: host
  nsqadmin:
    image: nsqio/nsq
    command: /nsqadmin --lookupd-http-address=localhost:4161 # nsqlookupd:4160
    depends_on:
      - nsqlookupd
    network_mode: host
  nsqfiles:
    image: nsqio/nsq
    command: /nsq_to_file --lookupd-http-address=localhost:4161 --topic=testing --output-dir=/tmp/nsq/logs
    depends_on:
      - nsqlookupd
    network_mode: host
    volumes:
      - messages-queue:/tmp/nsq/logs
volumes:
  messages-queue:
    driver: local
I'm running selenium grid through docker compose and targeting the different versions through my desired capabilities.
I have a requirement to test multiple browser versions; however, at the moment I need to target the full version, i.e. Chrome "63.0.3239.132" or "64.0.3282.140".
I want to be able to specify just 63 or 64 etc so that my docker setup can update regularly without the need to update the code.
Is there a way to do this through desired capabilities?
Below is my docker-compose file:
version: '2'
services:
  seleniumhub:
    image: selenium/hub:3.9.1-actinium
    ports:
      - 4444:4444
  chrome64:
    image: selenium/node-chrome-debug:3.9.1-actinium
    depends_on:
      - seleniumhub
    environment:
      HUB_PORT_4444_TCP_ADDR: seleniumhub
      HUB_PORT_4444_TCP_PORT: 4444
    ports:
      - 5900
  chrome63:
    image: selenium/node-chrome-debug:3.8.1-erbium
    depends_on:
      - seleniumhub
    environment:
      HUB_PORT_4444_TCP_ADDR: seleniumhub
      HUB_PORT_4444_TCP_PORT: 4444
    ports:
      - 5900
Below is how I set up my desired capabilities:
ICapabilities caps = new DesiredCapabilities();

// generic desktop browser config:
if (DriverConfig.BrowserName != "")
{
    ((DesiredCapabilities)caps).SetCapability("browserName", _browserName);
}
if (DriverConfig.Version != "")
{
    ((DesiredCapabilities)caps).SetCapability("version", _version);
}
if (DriverConfig.Platform != "")
{
    ((DesiredCapabilities)caps).SetCapability("platform", _platform);
}
If you specify -browser browserName=chrome,version=63 on the command line when registering a Selenium node with the hub, then you can specify version: 63 in the capabilities in your test script (matching the SetCapability("version", ...) call above).
So your problem comes down to whether you can pass -browser in the registration command when the grid is set up with Docker.
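With the selenium/node-chrome-debug images used above, one way to pass that registration flag is the SE_OPTS environment variable (the same variable that appears in the first compose file in this thread); a rough, untested sketch:

chrome63:
  image: selenium/node-chrome-debug:3.8.1-erbium
  depends_on:
    - seleniumhub
  environment:
    HUB_PORT_4444_TCP_ADDR: seleniumhub
    HUB_PORT_4444_TCP_PORT: 4444
    SE_OPTS: "-browser browserName=chrome,version=63" # register the node under a short version string
  ports:
    - 5900

The test code can then keep calling SetCapability("version", "63") and the hub should match on the short version.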
I'm trying to run Behat on Docker using selenium/hub, selenium/node-chrome-debug and selenium/node-firefox-debug images.
Running Behat with the Chrome node is working, but the Firefox node gives me the following error:
Could not open connection: Payload received from webdriver is valid but unexpected json: {"value":{"sessionId":"244f4715-c59b-4bfc-aa17-8f6a867ead83","capabilities":{"moz:profile":"/tmp/rust_mozprofile.u3mB4xKf6nVD","rotatable":false,"timeouts":{"implicit":0,"pageLoad":300000,"script":30000},"pageLoadStrategy":"normal","moz:headless":false,"moz:accessibilityChecks":false,"acceptInsecureCerts":false,"browserVersion":"57.0","platformVersion":"4.9.60-linuxkit-aufs","moz:processID":1005,"browserName":"firefox","platformName":"linux","moz:webdriverClick":false}}} (Behat\Mink\Exception\DriverException)
When I VNC into the Firefox node, I can see that it opened Firefox, but nothing happens.
My docker-compose.yml:
version: '3.2'
services:
  site.local:
    image: webdevops/php-apache-dev:7.1
    ports:
      - "8888:80"
    volumes:
      - ./public:/app
      - .:/application
  selenium-grid-hub.local:
    image: selenium/hub
    ports:
      - "4445:4444"
  selenium-node-chrome.local:
    image: selenium/node-chrome-debug
    environment:
      - HUB_PORT_4444_TCP_ADDR=selenium-grid-hub.local
      - HUB_PORT_4444_TCP_PORT=4444
    ports:
      - "5901:5900"
  selenium-node-firefox.local:
    image: selenium/node-firefox-debug
    environment:
      - HUB_PORT_4444_TCP_ADDR=selenium-grid-hub.local
      - HUB_PORT_4444_TCP_PORT=4444
    ports:
      - "5902:5900"
My behat.yml:
default:
  extensions:
    Behat\MinkExtension:
      base_url: "http://site.local"
      goutte:
        guzzle_parameters:
          verify: false
  suites:
    mysuite:
      paths: [ %paths.base%/features ]
      contexts:
        - Zstate\BehatSeleniumDockerSkeleton\Tests\Behat\Context\FeatureContext
chrome:
  extensions:
    Behat\MinkExtension:
      selenium2:
        browser: "chrome"
        wd_host: http://selenium-grid-hub.local:4444/wd/hub
        capabilities: {"browserName": "chrome", "browser": "chrome", 'chrome': {'switches':['--no-sandbox']}}
firefox:
  extensions:
    Behat\MinkExtension:
      selenium2:
        browser: "firefox"
        wd_host: http://selenium-grid-hub.local:4444/wd/hub
        capabilities: {"browserName": "firefox", "browser": "firefox"}
I created this small repo to replicate the issue.
I would greatly appreciate any help or advice. Please let me know if I miss something in my question so I can update it.
In case someone has the same error, this is what helped me.
In my case, the way to solve it was to set the following for Chrome:

capabilities: {"extra_capabilities": {"chromeOptions": {"w3c": false}}}

More info here.
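In the behat.yml from the question, that setting goes under the chrome profile's selenium2 capabilities; a sketch merging it with the existing entries:

chrome:
  extensions:
    Behat\MinkExtension:
      selenium2:
        browser: "chrome"
        wd_host: http://selenium-grid-hub.local:4444/wd/hub
        capabilities: {"browserName": "chrome", "extra_capabilities": {"chromeOptions": {"w3c": false}}}

The Firefox failure in the question itself looks like the same W3C/Geckodriver payload-format issue discussed in the first answer of this thread, so updating the PHP WebDriver client is the corresponding fix there.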